Our AI policy vs. code of conduct and vs. reality (discourse.llvm.org)

🤖 AI Summary
A long-standing Clang reviewer argues the project should ban AI-generated contributions outright, claiming they have steadily degraded the code-review process and the newcomer experience. The reviewer, one of the project's most active by review volume, says AI-produced patches are increasing, demand far more reviewer time, and behave differently from patches by human beginners: the authors don't understand their own patches, the issues in them aren't the kind that reveal other bugs, and responses to reviewer feedback are poorer. That extra workload forces reviewers to prioritize patches closer to completion, which lumps genuine new contributors in with AI-driven ones and leaves both with worse, sparser reviews. Any ban would necessarily rest on an honor system, but the writer insists it is needed to protect what they see as the greater value: human-led onboarding and iterative learning.

The post frames this as a stark policy choice for open-source maintainers: either stop accepting AI-assisted submissions, or accept that review bandwidth will shrink and many real newcomers won't get their patches accepted. For the AI/ML community it raises practical questions about detection, provenance, and tooling: how to identify AI-generated patches reliably, how to improve AI code quality and traceability, and whether models can be adapted to produce reviewable output. It's a cautionary tale about deployment impact: not just model performance, but community cost and maintenance overhead.