🤖 AI Summary
Faced with a last-minute Nix Steering Committee election that required ranking all 24 candidates, the author used an LLM to save time rather than read ~188,000 words of candidate statements (~7,800 words each). They encoded personal priorities into a “value set,” batched the markdown candidate files (two at a time to respect token limits), had the model summarize each candidate relative to those values into a single summary file, then asked the model to produce a ranked list. The author spot-checked several items and compared the LLM’s picks to a few candidates they already knew before submitting the ballot, treating the model’s output as an initial, verifiable draft rather than an unexamined decision.
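The batching-and-summarizing workflow described above can be sketched roughly as follows. This is a minimal sketch, not the author's actual script: the directory layout, the `VALUES` text, and the `llm_summarize` stub are all assumptions, and in the real workflow `llm_summarize` would be an actual chat-completion call carrying the value set plus two candidate statements.

```python
from pathlib import Path

BATCH_SIZE = 2  # two candidates per request, to stay under the token limit

# Hypothetical value set; the author's real priorities are not reproduced here.
VALUES = (
    "Prioritize: long-term project stability, governance experience, "
    "and technical depth."
)

def batch(items, size):
    """Yield successive fixed-size chunks of a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def llm_summarize(values, texts):
    """Placeholder for the real LLM call.

    The described workflow would send `values` plus the candidate
    statements in `texts` to a model and get back one value-conditioned
    summary per candidate; here we just return markers of the right shape.
    """
    return [f"[summary of {len(t)}-char statement vs. values]" for t in texts]

def summarize_candidates(statement_dir="candidates", out_file="summaries.md"):
    """Batch markdown candidate files through the model into one summary file."""
    files = sorted(Path(statement_dir).glob("*.md"))
    summaries = []
    for chunk in batch(files, BATCH_SIZE):
        texts = [f.read_text() for f in chunk]
        summaries.extend(llm_summarize(VALUES, texts))
    Path(out_file).write_text("\n\n".join(summaries))
```

A final prompt over `summaries.md` would then ask the model for a ranked list, which the author spot-checked by hand before voting.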
For the AI/ML community this is a concrete example of LLMs as productivity amplifiers for high-information civic and OSS tasks: practical workflow choices (source data in markdown, batching to respect token limits, value-conditioned prompts, and lightweight human verification) let users scale decision-making without reading everything. It also highlights important trade-offs, including value alignment, bias amplification, auditability, and the need for transparent pipelines. The takeaway is that LLMs work best as assistants producing auditable summaries and rankings, not as sole arbiters of governance choices.