🤖 AI Summary
The Fedora Council has begun formalizing an AI-assisted contributions policy following a contentious 2024 community survey and debate; a draft was published September 25. The survey itself drew criticism for biased framing, sampling issues (most respondents were users rather than contributors), and opaque reporting, so community sentiment was muddled: generally cautious or negative about embedding AI into core development, but more open to limited uses such as testing or infrastructure. The council's process has revealed a sharp split between those wanting strict bans (e.g., a Gentoo-style prohibition of AI-generated contributions) and those favoring cautious experimentation.
The draft policy takes a middle path: it encourages the use of AI assistants but makes the human contributor the accountable author, and asks (but does not require) disclosure when AI "significantly assisted" the work. AI can aid reviews and routine tasks (translation, note-taking, spam filtering), but must not make final acceptance decisions or handle project governance (code-of-conduct enforcement, talk selection). Any OS-level or shipped AI features must be opt-in and must not send data to remote services by default. Crucially, the draft bars "aggressive scraping" of Fedora project data for model training and highlights unresolved copyright and licensing risks from LLM ingestion of public code. If adopted, the policy would shape contributor workflows, data access for model builders, and governance precedents across open-source projects.