🤖 AI Summary
Meta told employees it will make “AI-driven impact” a core performance expectation beginning in 2026, judging workers on how they use AI to deliver results and build tools that materially boost productivity. For 2025, individual AI usage metrics won’t be part of formal reviews, but employees are advised to include AI-fueled wins in self-reviews, and Meta will reward those who make “exceptional AI-driven impact.” The company is also launching an “AI Performance Assistant” for this year’s review cycle and explicitly allows use of its internal Metamate bot and Google’s Gemini to draft review content. This follows other moves — AI-enabled coding interviews, an internal “Level Up” game — designed to accelerate AI adoption across teams.
For the AI/ML community, Meta’s policy signals a shift from optional experimentation to measurable, incentivized AI productization inside big tech. Expect rising demand for internal tooling, evaluation frameworks, and engineering that turns models into high-leverage features rather than prototypes. The move strengthens incentives to instrument and demonstrate measurable impact from AI, which can drive better tooling and standards, but it also raises questions about which metrics will count, how impact will be attributed, and how to prevent employees from gaming evaluations. Overall, Meta’s step crystallizes a broader industry trend: organizations are formalizing AI as a core competency tied directly to performance and career progression.