Segmentation and Translation Memory
To get the broken sentences in the new files to find their equivalents (or even just fuzzy matches) in translation memory, we have three options:
- Modify the Perl scripts that extract the text from the header files into the HTML, so that the scripts no longer introduce the hard returns.
- Massage the HTML files themselves and replace the hard returns with spaces (see the sketch after this list).
- Tune the segmentation rules in Trados so that it ignores the hard returns (but only the ones we want it to ignore) and doesn't consider a segment finished until it reaches a full stop/period.
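
For the second option, a quick Perl filter along these lines would probably do it. This is only a sketch: it assumes a hard return is spurious when the text before it doesn't end a sentence (or a tag) and the next line doesn't open a tag or a blank line, and the script and file names are made up, not from the actual project.

```perl
#!/usr/bin/perl
# Sketch for option 2: collapse hard returns that fall mid-sentence.
# Assumption (adjust to the real extracted HTML): a return is spurious when
# the character before it is not sentence-final punctuation or '>', and the
# next line does not start a tag or a blank line.
use strict;
use warnings;

local $/;            # slurp the whole file at once
my $html = <>;

# Join a line onto the previous one when the break looks mid-sentence.
$html =~ s/([^.?!>\s])[ \t]*\r?\n(?![ \t]*(?:\r?\n|<))/$1 /g;

print $html;
```

Run it as something like `perl collapse_returns.pl broken.html > fixed.html`. The same substitution could just as easily be folded back into the extraction scripts themselves, which is really option 1 by another route.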
Also, I don't want the tail to wag the dog. The money spent in translating false positives may be less than the time and money spent in fixing the problem.
Labels: HTML localization, Localization, localization project, Trados, translation memory