🤖 AI Summary
OpenAI, the creator of ChatGPT, has come under scrutiny for employing outsourced Kenyan workers at rates below $2 per hour to help reduce the AI's propensity for generating toxic content. While ChatGPT has been celebrated for its linguistic prowess, bias and harmful language in its training data posed significant challenges. To address this, OpenAI built an AI-powered safety system similar to those used by social media companies: human workers labeled examples of toxic content such as hate speech and violence, producing training data for a detector that filters such content from ChatGPT's outputs.
This practice sheds light on the often hidden labor force that supports AI development, particularly in the Global South. The conditions faced by these data labelers raise serious ethical concerns about automation's reliance on precarious labor: many workers reported mental health struggles from reviewing distressing material while receiving minimal compensation. OpenAI's actions have prompted a broader discussion about the responsibility of tech companies to ensure fair labor practices and mental health support for those contributing to AI safety, as the field grows more lucrative and influential across sectors.