LLMs and code generation are extremely powerful tools for a solo game developer. I made two attempts to build a game leveraging AI code generation, specifically GPT-5.3 Codex. The first attempt led to frustration and the feeling that I had lost creative control of the project. On the second attempt I shifted my mindset about how I used AI code generation, and I not only built a more stable project but also maintained creative control of my game.
The blind leading the blind
In both attempts I was trying to build the same game idea using the Godot game engine. I started without any AI and tried to bring it in later. I reached for AI because I wanted to add multiplayer to the game and had been struggling with it for a couple of weeks. The AI was able to generate a lot of code quickly, but the project wouldn’t run. Two things were important here.
- I didn’t know how to implement multiplayer myself, so how could I know whether the AI was doing it correctly?
- The project was not built with multiplayer in mind from the start so figuring out what was server-side and what was client-side retroactively was difficult for both myself and the agent.
Those two things compounded into frustration and a deadlock where I didn’t know whether to push forward with what the AI gave me or go back to the start of the project.
Giving myself and the agent a better starting point
I took a break from the project and came back to it later. For the second attempt I knew I wanted a multiplayer game, so I made that the first feature I implemented. I started by going through a tutorial to make a basic game that could host a lobby with multiple players. This gave both me and the agent a stronger starting point. Now each time I prompted the agent I could tell it what I wanted to be server-side and what I wanted to be client-side, because I had a project architecture that was designed from the beginning to handle that distinction.
The code the AI generated also became easier to understand, because it was working within the bounds of the system rather than fighting against what I had already built.
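In Godot 4, that server/client distinction can be made explicit in the scripts themselves, which gives the agent a concrete convention to follow. The snippet below is a minimal, hypothetical sketch (the node type and function name are my own, not from my actual project) of a server-authoritative pattern using Godot’s high-level multiplayer API:

```gdscript
# Hypothetical player script (Godot 4). The server owns the authoritative
# state; clients only *request* changes via RPC and the server validates.
extends CharacterBody2D

const SPEED := 200.0

# Any peer may call this, but the body only takes effect on the server.
@rpc("any_peer", "call_local")
func request_move(direction: Vector2) -> void:
	if not multiplayer.is_server():
		return  # Clients never apply movement directly.
	# Never trust client input blindly: clamp it before using it.
	velocity = direction.limit_length(1.0) * SPEED
	move_and_slide()
```

Making the server check the first line of every state-changing function is one way to keep the boundary obvious, both to me when reviewing and to the agent when generating new code.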
Let it build the how, tell it the what
Another mistake I made on the first attempt was involving the AI in the initial design documents for the game. I was trying some spec-driven development and used the agent to help generate the specs. This resulted in the AI pulling ideas from other games in the same genre. On its own that was fine, but those ideas were not what I had in mind. So I spent more time telling the AI what _not_ to build instead of having it generate code for the things I wanted.
On the second pass I used prototyping to drive the implementation. I would ask the agent to generate a specific feature, test that feature in game, then iterate on it from there. Using prototypes instead of specs let me leverage the AI’s ability to generate a lot of content quickly without having to constantly keep it on the rails. Playing the game also gave me a nice break from reviewing AI-generated text, and gave me more opportunity to question the AI’s implementation, looking for exploit opportunities the way a player would instead of trying to foresee the risks purely from the git diff.
Question Everything
Writing code by hand involves asking a lot of questions along the way: small questions like variable names, as well as larger ones about the performance and security implications of a change. When the bot is writing the code, these questions still need to be answered, but the developer has to explicitly ask them instead of encountering them naturally during development. Build opportunities to surface these questions into the implementation workflow. As mentioned above, I used the time spent playing and testing the game to think about risks and how to address them.
Three lessons to keep creative control
The first lesson is that even when using AI assistants, the project still needs an architecture that supports the desired features. AI will generate code to try to force a feature into a project, but learning from that code is difficult, and there is no way to know how it will scale as new features are added.
The second is that AI is really strong at building prototypes. Leverage that to implement a feature and then iterate on it, instead of trying to design the entire thing upfront.
And I say again: QUESTION EVERYTHING. Proactively look for exploits, performance bottlenecks, and duplicate code that can be abstracted. If the AI is generating the code, the developer needs to become a software architect, exploratory tester, and product owner to keep technical and creative control of the project.