Bots Write Bad Terraform and It's All Your Fault (www.proactiveops.io)

🤖 AI Summary
LLMs are producing low-quality Terraform because much of the public training data is itself poor, creating a feedback loop where bad machine-generated IaC gets published and then retrains models ("model collapse"). The author highlights common anti-patterns in LLM output:

- Redundant resource labels (e.g., aws_iam_role.my_role)
- Comment bloat
- Dumping every resource into main.tf
- Deprecated S3 syntax that ignores the AWS provider v4+ changes
- jsonencode() for IAM policies instead of the data.aws_iam_policy_document data source
- Sprawling "Swiss Army" modules that expose every property as a variable

This matters because fragile, verbose, or deprecated Terraform increases drift, downtime, and future refactoring, and because publishing it poisons training corpora, degrading automated generation over time. The concrete fixes:

- Stop publishing unreviewed bot output; enforce peer review (lint plus a human)
- Adopt linters and rule sets, such as the author's "Dave says" TFLint ruleset
- Document simple, opinionated standards: name resources <app>-<env>-<service>-<qualifier>, use name_prefix with create_before_destroy for replaceable resources, and split files (versions.tf, variables.tf, outputs.tf, s3.tf, network.tf)
- Plan before you prompt: write detailed prompts, share guidance docs with coding assistants (AGENTS.md-style), and prefer aws_iam_policy_document data sources for IAM policies

These practical steps improve immediate reliability and the long-term quality of LLM-generated Terraform. The HCL sketches below illustrate several of the recommendations.
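A minimal sketch of the naming advice. The app/env/service values ("billing", "prod", "api", "logs") are hypothetical, chosen only to show the <app>-<env>-<service>-<qualifier> pattern:

```hcl
# Anti-pattern: the label restates the resource type and carries no meaning.
resource "aws_s3_bucket" "my_bucket" {
  bucket = "my-bucket"
}

# Better: label the resource by its role and name it
# <app>-<env>-<service>-<qualifier>.
resource "aws_s3_bucket" "api_logs" {
  bucket = "billing-prod-api-logs"
}
```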
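On the deprecated S3 syntax: since v4 of the AWS provider, bucket features such as versioning are standalone resources rather than inline blocks on aws_s3_bucket. A sketch reusing the hypothetical bucket above:

```hcl
# Pre-v4 (deprecated): a versioning {} block inside aws_s3_bucket.
# v4+: one resource per bucket feature, attached by bucket ID.
resource "aws_s3_bucket_versioning" "api_logs" {
  bucket = aws_s3_bucket.api_logs.id

  versioning_configuration {
    status = "Enabled"
  }
}
```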
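For IAM, the recommendation is the aws_iam_policy_document data source over jsonencode(), so Terraform validates the policy structure and renders the JSON for you. A sketch, again with hypothetical names, granting read access to the bucket above:

```hcl
data "aws_iam_policy_document" "read_api_logs" {
  statement {
    effect    = "Allow"
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.api_logs.arn}/*"]
  }
}

resource "aws_iam_policy" "read_api_logs" {
  name   = "billing-prod-api-read-logs"
  policy = data.aws_iam_policy_document.read_api_logs.json
}
```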
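And a sketch of name_prefix with create_before_destroy for resources that must be replaceable without downtime; the ECS task role here is a hypothetical example, not from the article:

```hcl
data "aws_iam_policy_document" "assume" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ecs-tasks.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "api_task" {
  # name_prefix lets the provider append a unique suffix, so Terraform can
  # create the replacement role before destroying the old one.
  name_prefix        = "billing-prod-api-"
  assume_role_policy = data.aws_iam_policy_document.assume.json

  lifecycle {
    create_before_destroy = true
  }
}
```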