Coding big projects with AI

A common complaint from people doing “vibe coding” (using AI to write code interactively) is that once a project reaches a certain level of complexity, usually somewhere past 1,000 lines of code, the AI tool starts to struggle. The token context window gets exceeded, the AI loses track of the broader architecture, and new bugs appear faster than they can be fixed. Eventually, the project stalls.

I’ve been exploring ways to address this problem, and I think I’ve found a solid workflow.

In my case, I’ve been building with React recently, but this method works for just about any tech stack.

Step 1 – Start Small Inside ChatGPT Canvas

I build the early parts of the app directly in ChatGPT Canvas (or a similar AI coding environment). Once I’m happy with the initial progress, I export the project as a ZIP file and move it to my PC.
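Once it’s on disk, I put it straight under version control so everything later (diffs, rollbacks) stays clean. A minimal sketch, assuming the export is named my-app.zip (a hypothetical name):

```
unzip my-app.zip -d my-app    # on Windows 10+, tar -xf my-app.zip also works
cd my-app
git init
git add .
git commit -m "Initial import from ChatGPT Canvas"
```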

Step 2 – Set Up Git and Repo Utilities

On my PC, I have Git installed so I can track changes and apply patches cleanly. I also use a small Node.js utility called Repomix. Repomix packs every file in a given folder into a single .txt file (essentially a flattened snapshot of the codebase). You can add .repomixignore rules to skip certain files (like node_modules/, .env, or large assets). I’ve even automated the process with a .bat file so I can refresh the snapshot with a double-click.
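For a concrete picture, here is roughly what that setup looks like. The ignore rules and file names are illustrative, and Repomix flags may differ between versions, so treat this as a sketch. The .repomixignore file uses .gitignore syntax:

```
# .repomixignore: skip anything the AI doesn't need to see
node_modules/
build/
.env
*.png
*.jpg
```

And the double-click refresher is just a one-command .bat file (Repomix runs via npx, so Node.js must be installed):

```
@echo off
REM refresh-snapshot.bat: flatten the codebase into a single text file
npx repomix --style plain -o codebase.txt
pause
```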

Step 3 – Keep the AI in Sync

Whenever I need to give ChatGPT a full, current view of my codebase, I drag and drop this Repomix output file into the chat. I don’t do it after every change (usually only after every 4–5 major change requests), but it’s an easy way to avoid the “AI forgetting” problem on larger projects.

Step 4 – Use Diff Files for Clean Changes

When I ask ChatGPT to make a change, I request a unified diff (.diff or .patch) file instead of a full file overwrite. That way, I can apply it directly with git apply patch.diff, preserving my commit history and avoiding accidental overwrites.
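To make this concrete, a patch for a one-line change might look like the following (the file and the edit are made up for illustration):

```
--- a/src/App.jsx
+++ b/src/App.jsx
@@ -10 +10 @@
-  const [items, setItems] = useState([]);
+  const [items, setItems] = useState(initialItems);
```

Saved as patch.diff, it can be dry-run first, then applied and committed:

```
git apply --check patch.diff    # dry run: reports problems without touching files
git apply patch.diff
git commit -am "Apply AI-generated patch"
```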

Step 5 – Deploy to Production

Once the local build looks good, I use an FTP sync tool to push changes to my server. For React, I run npm run build or yarn build on the server (or locally before upload) to generate the optimized /build folder for production.
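As one possible version of that last step, assuming lftp as the FTP sync tool (the credentials, host, and remote path are placeholders):

```
npm run build                                  # writes the optimized bundle to /build
lftp -u USER,PASS ftp.example.com -e "mirror -R build/ /public_html/; quit"
```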

With this method, I do minimal manual editing. My role is mainly testing, applying diff patches, and keeping the AI aware of the code structure. I spend far less time reverse-engineering AI mistakes caused by file size limits or fragmented context.

A Note on AI Capabilities

GPT-5 now has a much larger context window, which helps a lot with mid-size projects. Google Gemini also offers a large context window and can output usable diff files. Still, even with these advances, managing your codebase so the AI always has the right context is the key to avoiding “context drift” and stalled projects.
