Your vibe coded slop PR is not welcome (samsaffron.com)

🤖 AI Summary
Open-source maintainers at Discourse warn that AI coding assistants (Copilot, Codex, Claude, etc.) are flooding projects with quick, machine-generated “vibe coded” prototypes submitted as pull requests, creating a new maintenance bottleneck. Generating large change sets is now cheap, but human review isn’t: these AI drafts often lack tests, contain security flaws, and would introduce technical debt if merged.

The team recommends a binary contribution model: keep prototypes as branches, demo videos, or forum posts, and reserve PRs for “ready-to-review” changes that humans have vetted, tested, and formally vouch for. They point to research (Veracode found only ~55% of generated tasks produced secure code) and the demo-to-product gap noted by Andrej Karpathy to underline the real risk of shipping unreviewed AI output.

Practically, maintainers should timebox initial reviews, close prototype PRs that masquerade as review-ready, and establish clear internal and public etiquette: label AI-assisted work, share prototypes via links or videos, and only submit PRs you stand behind. The piece frames LLMs as “alien intelligence”, powerful but inconsistent, and urges projects to set explicit contribution rules (some projects already ban AI-generated code for licensing and security reasons). The upshot: embrace AI for rapid prototyping, but protect maintainer time and code quality by insisting that any merged change be human-reviewed, tested, and owned.