🤖 AI Summary
Researchers at BetterUp Labs and Stanford Social Media Lab have coined “workslop” to describe AI-generated outputs that “masquerade as good work” but lack the substance, context, or completeness to actually advance tasks. In a Harvard Business Review piece they argue workslop is widespread — an ongoing survey of 1,150 U.S. full-time employees found 40% had received workslop in the past month — and may help explain why 95% of organizations that have tried AI report no return on investment. Rather than saving time, workslop often shifts the burden downstream, forcing coworkers to interpret, correct, or redo AI-produced content.
For the AI/ML community and workplace leaders, the term crystallizes key operational risks: poor prompts, insufficient grounding data, lack of human-in-the-loop checks, and absent norms can produce plausible but low-value outputs. The researchers recommend modeling purposeful AI use, setting clear guardrails, and defining acceptable norms — measures that imply investments in prompt engineering best practices, verification workflows, role-based responsibility for outputs, and tooling to surface AI confidence and provenance. Addressing workslop is therefore both a cultural and technical priority for realizing real productivity gains from generative systems.
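To make the "verification workflows" and "provenance" recommendations concrete, here is a minimal Python sketch of one possible guardrail: AI drafts carry provenance metadata (model, prompt, timestamp) and cannot advance downstream until a named human reviewer signs off. All names here (`AIDraft`, `approve`, `ready_to_send`) are hypothetical illustrations, not the researchers' implementation.

```python
# Hypothetical sketch: provenance tagging plus a human-in-the-loop gate
# for AI-generated drafts. Not from the article; an assumed design.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class AIDraft:
    content: str
    model: str                        # which model produced the draft
    prompt: str                       # prompt retained for auditability
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    reviewed_by: Optional[str] = None  # named human who owns the output

    def approve(self, reviewer: str) -> None:
        """Record role-based responsibility: a human signs off on the output."""
        self.reviewed_by = reviewer

    def ready_to_send(self) -> bool:
        """Unreviewed drafts never advance downstream."""
        return self.reviewed_by is not None


draft = AIDraft(
    content="Q3 summary ...",
    model="example-model",
    prompt="Summarize Q3 results",
)
assert not draft.ready_to_send()   # unreviewed output stays blocked
draft.approve(reviewer="j.doe")
assert draft.ready_to_send()       # provenance and accountability attached
```

The design choice the sketch illustrates is that verification is enforced structurally rather than by policy alone: the output type itself refuses to move forward without an accountable reviewer attached, which is one way to operationalize the "role-based responsibility for outputs" the summary describes.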