• 2021/09/02

Logrus Global releases Paralela, an AI-based aligner, as part of Logrus Global Localization Cloud

Sept 1, 2021 – In today’s data-driven world, high-quality bilingual data is a necessity both for training better translation models and for capturing domain-specific knowledge.

cushLEPOR uses LaBSE distilled knowledge to improve correlation with human translations

Automatic MT evaluation metrics are indispensable for MT research. Augmented metrics such as hLEPOR include broader evaluation factors (recall and position difference penalty)…

The elephant is in the room, or MT is still far from human parity

In the middle of the May holidays, on May 6, Slator, the industry’s largest news portal, published an overview of a report by a Google research team whose significance the industry has yet to fully grasp.

Embedding-based metrics do not reflect a translation’s quality, and this is a far-reaching fact

In the previous article, we mentioned the research by the Google Research team entitled “Experts, Errors, and Context: A Large-Scale Study of Human Evaluation for Machine Translation.”

Looking inside AI when it’s all around us… if that’s true at all

In the middle of the past century, humankind discovered nuclear power. People rushed to create bombs and build nuclear power plants despite lacking real knowledge and understanding…

Why the BLEU score is usually inflated

During model training, standard practice is to split the data set 90/10: the model is trained on 90% of the data and tested on the held-out 10%.
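The 90/10 split described above can be sketched as follows (a minimal illustration, not the article’s actual code; the `train_test_split` helper and the toy corpus are hypothetical):

```python
import random

def train_test_split(pairs, test_fraction=0.1, seed=42):
    """Shuffle a parallel corpus and hold out a fraction for testing."""
    rng = random.Random(seed)
    shuffled = pairs[:]  # copy so the original order is untouched
    rng.shuffle(shuffled)
    n_test = max(1, int(len(shuffled) * test_fraction))
    # Held-out test sentences must never appear in the training set;
    # any overlap lets the model "memorize" test data and inflates BLEU.
    return shuffled[n_test:], shuffled[:n_test]  # (train, test)

# Toy parallel corpus of 100 sentence pairs
corpus = [(f"source {i}", f"target {i}") for i in range(100)]
train, test = train_test_split(corpus)
print(len(train), len(test))  # 90 10
```

The point of the random shuffle and the strict disjointness of the two sets is exactly what the article’s title alludes to: if test sentences (or near-duplicates) leak into training, the BLEU score measured on the test set is inflated.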

We have implemented the hLEPOR metric as a public Python library, for the first time ever

It has always been a mystery to us why BLEU remains the most widespread metric, given that hLEPOR is a more advanced solution.