🤖 AI Summary
More than 200 contract workers who rated and edited outputs for Google's AI products were abruptly laid off in at least two rounds last month, workers told WIRED. Known as "super raters" and employed by Hitachi-owned GlobalLogic and its subcontractors, many of these contractors hold advanced degrees and have backgrounds as writers or teachers; they evaluated and rewrote responses for Gemini, Google's AI Overviews (search generative experience/Magi), and other features. Workers allege the cuts followed organizing around pay parity, job security, and workplace policies: efforts to unionize, shared pay surveys, and complaints about forced return-to-office rules, five-minute task timers, and suppressed social channels. Two unfair labor practice complaints have been filed with the NLRB, and internal documents reportedly show GlobalLogic using human rater data to train automated systems that could supplant those same raters.
The story matters because these workers form a critical human feedback loop for grounding, source use, safety, and quality in deployed models. Large-scale layoffs, pay stratification ($28–$32/hr for some hires versus $18–$22/hr for third-party contractors), and pressure to hit speed metrics risk degrading annotation quality and increasing brittleness or hallucinations in the models they help train. The episode also underscores structural risks in AI supply chains: outsourced, precarious labor performing high-skill evaluation work, potential employer retaliation against organizing, and a push to automate the very oversight roles that help govern model behavior. That combination raises questions about auditability, accountability, and the sustainability of current model-training practices.