Polish to be the most effective language for prompting AI, new study reveals (www.euronews.com)

🤖 AI Summary
Researchers at the University of Maryland and Microsoft ran a cross‑lingual prompting study on five major LLM families (OpenAI, Google Gemini, Qwen, Llama and DeepSeek), feeding identical long‑text inputs in 26 languages and measuring task‑completion accuracy. The surprising headline: Polish came out on top with an average accuracy of 88%, followed by French (87%), Italian (86%) and Spanish (85%); English ranked sixth at 83.9%, and Chinese landed near the bottom (fourth from last).

The result highlights that a model's performance in a language does not simply track the volume of training data: Polish performed well despite a far smaller available corpus than English or Chinese. That matters for prompt engineering, benchmarking and deployment, because prompting effectiveness is language‑dependent, so evaluations and best practices built around English can mislead. Possible technical roots include tokenization, morphological richness, syntax that reduces ambiguity for models, or gaps in multilingual pretraining and evaluation sets.

Practically, this suggests teams should test prompts across target languages, expand multilingual benchmarks, and consider language‑specific prompt strategies and safety checks when deploying LLMs internationally.
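The per-language testing the summary recommends can be sketched as a small aggregation harness. This is a minimal illustration with hypothetical pass/fail records, not the study's actual methodology; in practice the records would come from sending the same prompt set to a model in each target language and scoring task completion.

```python
from collections import defaultdict

# Hypothetical evaluation records: (language code, task_passed).
# A real harness would populate these from model responses scored
# against per-task success criteria.
results = [
    ("pl", True), ("pl", True), ("pl", False),
    ("en", True), ("en", False), ("en", False),
    ("zh", True), ("zh", False), ("zh", False),
]

def accuracy_by_language(records):
    """Aggregate pass/fail records into per-language accuracy."""
    totals = defaultdict(lambda: [0, 0])  # lang -> [passed, total]
    for lang, passed in records:
        totals[lang][0] += int(passed)
        totals[lang][1] += 1
    return {lang: passed / total for lang, (passed, total) in totals.items()}

# Rank languages by accuracy, highest first.
ranked = sorted(accuracy_by_language(results).items(),
                key=lambda kv: kv[1], reverse=True)
for lang, acc in ranked:
    print(f"{lang}: {acc:.1%}")
```

Even a toy table like this makes language-dependent gaps visible before deployment; the same loop extends to per-task or per-model breakdowns.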