[RFC] LLVM AI tool policy: start small, no slop (discourse.llvm.org)

🤖 AI Summary
LLVM maintainers published a draft "AI Tool Use Policy" to curb low-quality, unvetted AI-generated contributions ("slop") and give maintainers clear guidance for handling them. The policy is liberal about tool use but stresses that humans remain fully accountable: AI output is a suggestion that must be reviewed, tested, and understood before submission. Key rules include transparency (e.g., an Assisted-by: commit trailer for significant AI help), limits on reviewer automation (AI may assist but cannot make final merge decisions), and an explicit suggestion that new contributors keep initial code changes small, with a proposed objective threshold of 150 lines of added non-test code (acknowledged as arbitrary).

The policy covers code, RFCs, issues, comments, and other contributions, warns against "extractive" patches that impose disproportionate review cost, and requires copyright diligence for any regenerated material.

Significance: this sets a practical, enforceable norm for a major OSS ecosystem, balancing openness to AI tooling with protections for maintainer time, code quality, and legal risk. Technically, it formalizes reviewer workflows (labels, canned responses, escalation to moderation), discourages large AI-produced diffs that increase review burden, and reinforces human-in-the-loop validation for correctness and licensing. The draft borrows from Fedora's proposal and aims to influence contributor behavior, onboarding, and moderation practices across compiler and wider open-source communities as AI-assisted development proliferates.