■ 08.2021 ■ Optimizing Word Alignments with Better Subword Tokenization ○ NLP (Paper)
○ Proceedings of Machine Translation Summit XVIII: Research Track, Association for Machine Translation in the Americas
Word alignments identify translational correspondences between words in a parallel sentence pair and are used, for example, to train statistical machine translation, to learn bilingual dictionaries, or to perform quality estimation. Subword tokenization has become a standard preprocessing step for a large number of applications, and notably for state-of-the-art open-vocabulary machine translation systems. In this paper, we thoroughly study how this preprocessing step interacts with the word alignment task and propose several tokenization strategies to obtain well-segmented parallel corpora. Using these new techniques, we were able to improve baseline word-based alignment models for six language pairs.
See publication