🤖 AI Summary
A developer tested Microsoft Copilot by prompting it to generate complete C++ video games: first a working 2D “train defender” using SDL for I/O, Box2D for physics and ENTT for the ECS, then an attempted 3D first‑person shooter using SDL, OpenGL (and GLAD), Bullet Physics and ENTT. Copilot produced surprisingly large, nontrivial codebases — single-file C++ programs, shaders when asked, and even some rendering and damage simulation — but never produced a fully working 3D game. The author used careful prompt engineering (e.g., “output a single C++ file,” “do not generate a build system,” “include shaders,” “integrate Box2D with ENTT”) to steer results and discovered practical limits of the tool.
Technically important takeaways: Copilot has output-size limits and will “drip” code across prompts; it hallucinates plausible-looking but invalid artifacts (e.g., nonexistent OpenGameArt URLs); it sometimes defaults to verbose or obsolete implementations (OpenGL immediate mode) unless explicitly forbidden; and it often generates incomplete build or compilation steps. The experiment highlights Copilot’s strength as a rapid prototyping and learning aid but underscores that skilled developers are still required to validate, refactor, secure, and integrate generated code. For the AI/ML community this reinforces that prompt design, output verification, dependency and API vetting, and security review remain essential when using LLMs to generate complex software.