Judge, don’t generate: AI beats MT at QA feat. Marco Baglioni
MT can draft fast, but it often hides meaning errors behind fluent prose.
In this session, Marco Baglioni demonstrates how modern LLMs, when used as judges rather than generators, deliver segment-level QA that identifies semantic shifts, enforces terminology (including inflections and multi-word terms), and reduces false positives on source/target inconsistencies.
We’ll cover a practical, CAT/TMS-friendly workflow: segment the text, preprocess it for context, run criteria-first prompts, and return the feedback as annotated XLIFF for focused post-editing.
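To make the “criteria-first prompt” step concrete, here is a minimal sketch of an LLM-as-judge segment check. It is not Marco’s or LanguageCheck.ai’s implementation; the OpenAI client, the gpt-4o model name, the judge_segment helper, and the JSON issue schema are all illustrative assumptions.

```python
# Minimal sketch of a criteria-first LLM judge for one translation segment.
# Assumptions (not from the episode): OpenAI Python client as the backend,
# "gpt-4o" as the model, and the JSON issue schema defined in the prompt.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CRITERIA = """You are a translation QA judge. Do NOT retranslate.
Evaluate the target against the source on these criteria, in order:
1. Semantic accuracy: flag meaning shifts, omissions, and additions.
2. Terminology: each glossary term in the source must be rendered in the
   target, allowing for inflections and multi-word terms.
3. Consistency: flag genuine source/target inconsistencies only;
   do not flag legitimate rephrasing.
Return JSON: {"issues": [{"criterion": str, "severity": "minor"|"major",
"explanation": str}]}. Return {"issues": []} if the segment passes."""


def judge_segment(source: str, target: str, glossary: dict[str, str]) -> dict:
    """Run one segment through the LLM judge and return the parsed verdict."""
    terms = "\n".join(f"- {s} -> {t}" for s, t in glossary.items())
    prompt = (
        f"Glossary:\n{terms}\n\n"
        f"Source segment:\n{source}\n\n"
        f"Target segment:\n{target}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any judge-capable LLM would do
        messages=[
            {"role": "system", "content": CRITERIA},
            {"role": "user", "content": prompt},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)


if __name__ == "__main__":
    verdict = judge_segment(
        source="Press the emergency stop button to halt the conveyor.",
        target="Premere il pulsante di arresto per fermare il nastro.",
        glossary={"emergency stop button": "pulsante di arresto di emergenza"},
    )
    print(json.dumps(verdict, indent=2, ensure_ascii=False))
```

In a full pipeline, each verdict would be mapped back onto the corresponding XLIFF segment as an annotation so post-editors can jump straight to flagged issues.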
Expect a pragmatic playbook: real examples and what actually reduces MTPE time.
About Marco: Marco Baglioni is the CEO & Co-Founder of LanguageCheck.ai and a Lecturer in Engineering Management and in Interpreting and Translation (DIT) at the University of Bologna.
About Nimdzi Live: There is a shadow industry driving the growth of ALL global brands: Localization. Let’s talk globalization, localization, translation, interpretation, language, and culture, with an emphasis on how it affects your business, whether you have a scrappy start-up or are working in a top global brand.
Would you like to be a guest on Nimdzi Live? Or you know somebody who should? Email [email protected] or reach out to [email protected] so we can coordinate!