Meet the AI workers who tell their friends and family to stay away from AI (www.theguardian.com)

🤖 AI Summary
A cadre of frontline AI raters, the thousands of contract workers who label, fact-check, and rate outputs from chatbots and image generators, is increasingly advising friends and family to avoid generative AI. Workers interviewed recount moments that shattered their confidence in the models (for example, a model mislabeling a tweet containing a racial slur), along with pervasive time pressure, vague instructions, and tasks that ask non-experts to vet medical or other sensitive content. Several said they now ban chatbots at home and try to teach loved ones to probe the models' limits, while companies such as Amazon and Google emphasize that raters choose their tasks and that ratings are only one signal among many.

The trend matters because it points to systemic gaps between human oversight and deployed models: rushed tooling, inconsistent training data, and ignored rater feedback can bake biases and hallucinations into consumer-facing systems. Audits cited in the piece found that chatbots cut back on "I don't know" responses while repeating false claims more often (non-response rates fell from 31% to 0%, while repetition of false claims rose from 18% to 35%), and raters report seeing "garbage in, garbage out" data fed into training. The story underscores the technical stakes: poor-quality labels, insufficient validation, and incentives to ship fast raise the risk of misinformation, ethical harms, and mounting environmental and labor costs unless companies strengthen rater support, transparency, and model evaluation.