Localization beyond the Translation Memory tools
Still reeling from the preliminary wordcount on the HTML pages in the API Ref...
While it's true that 3280 pages is not a phenomenal amount, it's too onerous a number for review and vetting by a single human, particularly when I would be that human.
I've dusted off BeyondCompare for some preliminary testing. I think I can use it to pour these pages into a few different buckets (a rough scripted sketch of the same triage follows the list):
1) New, orphan pages - newly written content
2) Pages which have not changed at all since the last time I handed them off for localization
3) Pages which have changed immaterially (datestamp in footer, etc.) since last handoff
4) Pages which have changed for reasons that won't matter to translators (format changes, cleaned-up typos in English)
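For what it's worth, that triage can be roughed out in a short script as well. Below is a minimal sketch in Python that sorts pages into approximately those buckets by hashing the old and new copies under progressively looser normalization. The directory names, the datestamp pattern, and the tag-stripping regex are all assumptions for illustration, not anything BeyondCompare or Trados prescribes, and a genuine typo fix in the English will still land in the "needs review" pile, since a hash can't tell it apart from a real content change.

```python
#!/usr/bin/env python3
# Sketch only: a first pass at the buckets described above, comparing the
# current API Ref pages against the set from the last localization handoff.
# Directory names and regex patterns are assumptions for illustration.
import hashlib
import re
from pathlib import Path

PREV = Path("last_handoff")      # assumed: pages as sent to translators last time
CURR = Path("api_ref_current")   # assumed: freshly generated API Ref pages

DATESTAMP = re.compile(r"\d{4}-\d{2}-\d{2}")  # assumed footer datestamp format
TAGS = re.compile(r"<[^>]+>")                 # crude markup stripper
SPACE = re.compile(r"\s+")

def digest(path: Path, strip_dates: bool = False, strip_tags: bool = False) -> str:
    """Hash a page, optionally ignoring datestamps and/or markup."""
    text = path.read_text(encoding="utf-8", errors="replace")
    if strip_dates:
        text = DATESTAMP.sub("", text)
    if strip_tags:
        text = TAGS.sub(" ", text)
    return hashlib.sha1(SPACE.sub(" ", text).encode("utf-8")).hexdigest()

buckets = {"new": [], "unchanged": [], "trivial": [], "format_only": [], "review": []}

for page in sorted(CURR.rglob("*.html")):
    rel = page.relative_to(CURR)
    old = PREV / rel
    if not old.exists():
        buckets["new"].append(rel)            # 1) new, orphan pages
    elif digest(page) == digest(old):
        buckets["unchanged"].append(rel)      # 2) untouched since last handoff
    elif digest(page, strip_dates=True) == digest(old, strip_dates=True):
        buckets["trivial"].append(rel)        # 3) only the footer datestamp moved
    elif (digest(page, strip_dates=True, strip_tags=True)
          == digest(old, strip_dates=True, strip_tags=True)):
        buckets["format_only"].append(rel)    # 4) markup changed, visible text did not
    else:
        buckets["review"].append(rel)         # real text changes: needs a human
        # (typo fixes land here too; a hash can't separate them from new content)

for name, pages in buckets.items():
    print(f"{name:12s} {len(pages):5d} pages")
```

The loosening order matters: each bucket only catches what the stricter comparisons above it let through, so a page is never counted twice.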
The question is: Will this lead to a higher or lower margin of error than I get when I simply throw all 3280 pages into Trados?
This is the stuff localization consulting is made of.
Labels: translation memory