🤖 AI Summary
An investigation and a CBS 60 Minutes report reveal that thousands of low-paid contractors in places like Nairobi do the unseen, often traumatic work that makes modern conversational AIs possible, sometimes for as little as $1.50 an hour: watching graphic content, moderating outputs, and labeling examples for model training. Personal accounts describe severe psychological harm and little support, while opaque contracting and nondisclosure practices keep these human workflows hidden from users and regulators. The story reframes the “cost” of services like ChatGPT: each query doesn’t just consume compute; it also depends on a distributed human labor pipeline to filter, rate, and shape model behavior.
For the AI/ML community this matters technically and ethically. The tasks described are core to supervised fine-tuning and reinforcement learning from human feedback (RLHF): annotators craft safety labels, score and rank responses, and perform red-teaming that directly affects model alignment, hallucination rates, and bias mitigation. Low pay, high turnover, and trauma risk can degrade label quality and introduce systematic biases, while opaque sourcing undermines dataset provenance and reproducibility. The findings strengthen calls for transparency about human-in-the-loop processes, better worker protections and mental-health support, audit trails for training data, and consideration of how scaling LLM services externalizes social and ethical costs.
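To make the RLHF connection concrete, here is a minimal sketch, with hypothetical field names and made-up example data not drawn from the report, of how a single annotator comparison can become a training signal for a reward model via a standard Bradley-Terry-style pairwise loss:

```python
import math
from dataclasses import dataclass

# Hypothetical record of one human comparison, roughly as an annotator might produce it.
@dataclass
class PreferenceLabel:
    prompt: str
    chosen: str     # response the annotator rated as better/safer
    rejected: str   # response the annotator rated as worse/unsafe

def pairwise_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry-style loss commonly used to fit a reward model to human
    comparisons: the loss shrinks as the model scores the annotator's
    preferred response above the rejected one."""
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

# Example: one labeled comparison and two possible reward-model scorings of it.
label = PreferenceLabel(
    prompt="How do I dispose of old batteries?",
    chosen="Take them to a certified recycling drop-off point.",
    rejected="Just throw them in the trash.",
)
print(pairwise_loss(reward_chosen=1.2, reward_rejected=-0.4))  # ~0.18: model agrees with the annotator
print(pairwise_loss(reward_chosen=-0.4, reward_rejected=1.2))  # ~1.78: model disagrees, loss is large
```

In a pipeline like this, annotator fatigue, turnover, or inconsistent guidelines show up directly as noisy chosen/rejected pairs, which is why the working conditions described in the report are a technical concern as well as an ethical one.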
        