Prompt Strategies for Terraform Test Generation (masterpoint.io)

🤖 AI Summary
Masterpoint.io walks through iterating on a durable LLM prompt that automatically generates Terraform tests for child modules, and has published the final prompt in their shared-prompts GitHub repo. They experimented with Cursor (IDE agent mode) and Claude Code, found that model quality and prompt specificity mattered a great deal, and settled on Claude Sonnet 4 plus a refined prompt that reliably created a /tests directory, sensibly named test files, and starter tests covering the basics, variable validation, happy paths, and edge cases. Early “v0” prompts produced broken scaffolding and poor file placement, while Claude Code with Sonnet 4 produced a much better initial layout and test logic; reusing the refined prompt in Cursor (v2) reproduced that quality with faster iteration.

The post's practical takeaways: define acceptance criteria (basic Terraform/OpenTofu tests, input validation, happy paths, edge cases), explicitly instruct the model how to mock providers, structure shared test inputs, and test locals without exposing them as outputs. Expect non-determinism: rerun the prompt to get multiple candidate tests, and plan for human review and refactoring to avoid superficial or incorrect coverage. The refined prompt and Cursor rule (tf-testing-child-module.mdc) are available for teams to try, offering a replicable way to speed up IaC test scaffolding while preserving engineering rigor.
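To make the takeaways concrete, a starter test in Terraform's native .tftest.hcl format (also supported by OpenTofu's tofu test) might look like the minimal sketch below. The file name, the name variable, and the mocked aws_s3_bucket resource are illustrative assumptions, not the exact output of the published prompt, and the validation check assumes the module defines a validation block on var.name. Mock provider blocks require Terraform 1.7 or later.

# tests/basic.tftest.hcl (illustrative sketch; file and variable names are assumptions)

# Mock the provider so the suite runs without real credentials or infrastructure.
mock_provider "aws" {
  mock_resource "aws_s3_bucket" {
    defaults = {
      arn = "arn:aws:s3:::mock-bucket"
    }
  }
}

# Shared inputs reused by every run block below.
variables {
  name = "example"
}

# Happy path: a plan with the shared inputs should succeed.
run "happy_path" {
  command = plan

  assert {
    condition     = var.name == "example"
    error_message = "Expected the name variable to pass through unchanged."
  }
}

# Edge case: an empty name should trip the variable's validation block
# (assumes the module declares one on var.name).
run "rejects_empty_name" {
  command = plan

  variables {
    name = ""
  }

  expect_failures = [var.name]
}

Running terraform test (or tofu test) from the module root executes every .tftest.hcl file in the /tests directory, which is the layout the refined prompt is designed to generate.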