🤖 AI Summary
Inside Tesla’s glass-walled lab, dozens of hired “data collectors” run, dance, and repeat mundane actions (wiping a table, lifting a cup, pulling a curtain) hundreds of times while wearing helmet-mounted cameras and a 30–40 lb backpack, all so Tesla can teach its Optimus humanoid to move like a person. Business Insider’s reporting describes eight-hour shifts scored on data quality, with roughly four hours of usable footage expected per shift, plus detailed manuals, peer checks, and AI-generated prompts; tasks range from baby-style ring-sorting to kung fu demos and even uncomfortable or absurd requests. Elon Musk has framed Optimus as a strategic priority, forecasting up to 1 million units per year and claiming it could account for roughly 80% of Tesla’s value, yet the work behind the scenes is labor-intensive, physically demanding, and closely managed.
Technically, Tesla has shifted from teleoperation with full motion-capture suits toward camera-only and multi-view “tower” recording to scale data collection, while still using haptic gloves and gantries for certain tests. That pivot highlights a key trade-off: mocap yields precise kinematics but is slow and can induce motion sickness in operators; camera-only capture scales faster but may lose fine-grained hand and pose fidelity. Repeated human demonstrations, AI prompts, and multi-angle footage help train control policies, but frequent robot tumbles, injuries to workers, and limited transparency into real generalization show the distance between slick demos and robust autonomy. The story underscores an industry-wide reality: current humanoid progress often rests on intensive human labor and curated demos, not yet on generalizable, safe, fully autonomous agents.