🤖 AI Summary
Kagi Search today launched SlopStop, a community-driven system that detects, labels, and downranks “AI slop”: low-value or deceptive AI-generated text, images, and video intended to manipulate rankings or attention. Users can flag results via a shield icon; Kagi verifies reports with internal signals and displays a real-time “AI slop” score in search results. Domains that primarily publish AI-generated content are downranked site-wide, mixed domains have individual pages flagged, and confirmed AI images and videos are labeled and can be filtered out entirely. Kagi pairs SlopStop with its Small Web initiative to whitelist and amplify verified human creators, aiming to prioritize trustworthy, value-driven content over content farms.
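The domain-versus-page policy can be pictured as a simple decision rule. The sketch below is a minimal illustration under assumed thresholds and field names (Kagi has not published its scoring internals): domains where most pages score as slop are downranked wholesale, while mixed domains only get individual pages flagged.

```python
from dataclasses import dataclass

# Hypothetical thresholds and field names; Kagi has not published its scoring internals.
DOMAIN_SLOP_SHARE = 0.8    # assumed share of sloppy pages that marks a "primarily AI-generated" domain
PAGE_SLOP_THRESHOLD = 0.5  # assumed cutoff for flagging an individual page

@dataclass
class PageSignal:
    url: str
    slop_score: float  # combined score from community reports plus internal detectors

def rank_adjustment(domains: dict[str, list[PageSignal]]) -> dict[str, str]:
    """Return an action per URL: 'downrank_domain', 'flag_page', or 'keep'."""
    actions: dict[str, str] = {}
    for domain, pages in domains.items():
        if not pages:
            continue
        sloppy_share = sum(p.slop_score >= PAGE_SLOP_THRESHOLD for p in pages) / len(pages)
        if sloppy_share >= DOMAIN_SLOP_SHARE:
            # Domain publishes mostly AI-generated content: downrank site-wide.
            for p in pages:
                actions[p.url] = "downrank_domain"
        else:
            # Mixed domain: flag only the individual pages that score high.
            for p in pages:
                actions[p.url] = "flag_page" if p.slop_score >= PAGE_SLOP_THRESHOLD else "keep"
    return actions
```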
For the AI/ML community, SlopStop is notable both as an operational countermeasure and as a data resource: Kagi will build what it calls the largest AI-slop dataset using in-house detectors plus crowdsourced reports, then use it to train detection models and reduce hallucinations and misinformation (which Kagi cites as contributing to 30–41% of failed responses in other chatbots). The human-in-the-loop verification, domain-vs-page labeling policy, media filtering options, and planned dataset release make this a practical testbed for detection techniques, adversarial-resistance research, and content-moderation workflows, and they signal a growing arms race between generative-content producers and platform-level defenses.
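A human-in-the-loop verification step feeding a training dataset could plausibly look like the sketch below; the record schema, thresholds, and function names are illustrative assumptions, not Kagi's published design.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical record schema; Kagi has not published its dataset format.
@dataclass
class SlopReport:
    url: str
    media_type: str        # "text", "image", or "video"
    user_flags: int        # number of community reports via the shield icon
    detector_score: float  # internal AI-content detector output in [0, 1]
    verified: bool         # set after review

def verify_report(report: SlopReport, min_flags: int = 3, min_score: float = 0.7) -> SlopReport:
    """Confirm a report only when community flags and internal signals agree (assumed rule)."""
    report.verified = report.user_flags >= min_flags and report.detector_score >= min_score
    return report

def to_dataset_record(report: SlopReport) -> str:
    """Serialize a verified report as one JSON line for a detection-model training set."""
    return json.dumps(asdict(report))

# Example usage
r = verify_report(SlopReport("https://example.com/post", "text",
                             user_flags=5, detector_score=0.9, verified=False))
if r.verified:
    print(to_dataset_record(r))
```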