• 2022/07/09

AI and Quality Measurement are equivalent problems

In recent public discourse, some have claimed that there is no longer any need to measure translation quality.

  • 2022/05/22

Paralela is featured in the 337th issue of the Toolbox Journal

Jost Zetzsche has featured the Paralela aligner in a recent issue of the Toolbox Journal. The article reads:

New, interesting research by Google Research and MIT on more effective human evaluation for machine translation cites our work.

Google Research, in cooperation with the MIT Media Lab, has published new research, “Toward More Effective Human Evaluation for Machine Translation”

  • 2021/09/02

Logrus Global releases Paralela, an AI-based aligner, as part of Logrus Global Localization Cloud

Sept 1, 2021 – In today’s data-driven world, high-quality bilingual data is a necessity both for training better translation models and for capturing domain-specific knowledge…

CushLEPOR uses LABSE distilled knowledge to improve correlation with human translations

Automatic MT evaluation metrics are indispensable for MT research. Augmented metrics such as hLEPOR include broader evaluation factors (recall and position difference penalty)…

The elephant is in the room, or MT is still far from human parity

In the middle of the May holidays, on May 6, Slator, the industry’s largest news portal, published an overview post about a Google research team’s report, the significance of which the industry still needs to understand.

Metrics based on embeddings do not reflect a translation’s quality, and this is a far-reaching fact

In the previous article we mentioned the research by the Google Research team entitled “Experts, Errors, and Context: A Large-Scale Study of Human Evaluation for Machine Translation,”

Looking inside AI when it’s all around us… if that’s true at all

In the middle of the past century, humankind discovered nuclear power. People rushed to create a bomb and build nuclear power plants despite the lack of real knowledge and understanding…