Google Research, in cooperation with the MIT Media Lab, has published new research: “Toward More Effective Human Evaluation for Machine Translation.”
Sept 1, 2021 – In today’s data-driven world, high-quality bilingual data is essential both for training better translation models and for capturing domain-specific knowledge.
Automatic MT evaluation metrics are indispensable for MT research. Augmented metrics such as hLEPOR incorporate broader evaluation factors (recall and a position-difference penalty)…
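To make those factors concrete, here is a toy illustration of the kinds of ingredients LEPOR-style metrics combine: precision, recall, a length penalty, and a word-position-difference penalty. This is a simplified sketch for intuition only, not the official hLEPOR formula; the function name and weighting are our own.

```python
import math

def simple_lepor(hypothesis: str, reference: str) -> float:
    # Toy LEPOR-style score: NOT the official hLEPOR formula, just an
    # illustration of combining precision, recall, a length penalty,
    # and a position-difference penalty.
    hyp, ref = hypothesis.split(), reference.split()

    # Unigram matches (multiset intersection).
    matches, ref_pool = 0, list(ref)
    for w in hyp:
        if w in ref_pool:
            ref_pool.remove(w)
            matches += 1

    precision = matches / len(hyp) if hyp else 0.0
    recall = matches / len(ref) if ref else 0.0
    if precision + recall == 0:
        return 0.0
    harmonic = 2 * precision * recall / (precision + recall)

    # Length penalty: penalize length mismatch against the reference.
    if len(hyp) < len(ref):
        lp = math.exp(1 - len(ref) / len(hyp))
    elif len(hyp) > len(ref):
        lp = math.exp(1 - len(hyp) / len(ref))
    else:
        lp = 1.0

    # Position-difference penalty: average normalized distance between
    # each hypothesis word and its first match in the reference.
    diffs = [abs(i / len(hyp) - ref.index(w) / len(ref))
             for i, w in enumerate(hyp) if w in ref]
    pos_penalty = math.exp(-(sum(diffs) / len(hyp)))

    return lp * pos_penalty * harmonic

score = simple_lepor("dogs chase cats around town",
                     "dogs chase cats around town")
print(round(score, 3))  # identical sentences → 1.0
```

A reordered hypothesis with the same words keeps perfect precision and recall but is penalized by the position-difference term, which is exactly the kind of signal BLEU-style precision counting alone does not capture.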
On May 6, in the middle of the May holidays, Slator, the industry’s largest news portal, published an overview of a Google research team’s report whose significance the industry has yet to fully grasp.
In the previous article we mentioned research by the Google Research team entitled “Experts, Errors, and Context: A Large-Scale Study of Human Evaluation for Machine Translation.”
In the middle of the last century, humankind discovered nuclear power. People rushed to build bombs and nuclear power plants despite lacking real knowledge and understanding…
During model training, a standard practice is to split the dataset 90/10: the model is trained on the 90% portion and evaluated on the held-out 10%.
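The split described above can be sketched in a few lines of standard-library Python; the toy sentence pairs here are hypothetical placeholders for real bilingual data.

```python
import random

# Hypothetical toy corpus standing in for real bilingual sentence pairs.
pairs = [(f"src sentence {i}", f"tgt sentence {i}") for i in range(100)]

random.seed(42)        # fix the shuffle for reproducibility
random.shuffle(pairs)  # randomize order before splitting

split = int(0.9 * len(pairs))  # 90% boundary
train_pairs = pairs[:split]    # 90 pairs for training
test_pairs = pairs[split:]     # 10 held-out pairs for testing

print(len(train_pairs), len(test_pairs))  # 90 10
```

Shuffling before the cut matters: without it, an ordered corpus (e.g. grouped by domain) would leave the test set unrepresentative of the training distribution.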
It has always been a mystery to us why BLEU remains the most widespread metric, given that hLEPOR is a more advanced solution.
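BLEU’s core limitation is visible in a minimal sketch: it scores only modified n-gram precision times a brevity penalty, with no recall or word-order term. This is a simplified single-reference version for illustration (no smoothing), not the full BLEU specification.

```python
import math
from collections import Counter

def simple_bleu(hypothesis: str, reference: str, max_n: int = 2) -> float:
    # Toy sentence-level BLEU: geometric mean of modified n-gram
    # precisions (up to bigrams) times a brevity penalty.
    # Illustration only: single reference, no smoothing.
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped matches
        precisions.append(overlap / max(sum(hyp_ngrams.values()), 1))
    if min(precisions) == 0:
        return 0.0  # any zero precision collapses the geometric mean
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty: punish hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * geo_mean

print(round(simple_bleu("dogs chase cats", "dogs chase cats"), 3))  # 1.0
```

Note that a hypothesis containing all the right words in the wrong order can score zero here purely because its bigrams fail to match, while a metric with recall and a position penalty would grade it more gracefully.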