I use all of the AIs together

Make them work together. If one is good at planning, use it for planning. If one is state of the art at coding, use that one for coding. Have them share the context. Have another one edit the context and prepare the next prompt.
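The role-splitting idea can be sketched as a simple routing table. The model names and the `pick_model` helper here are illustrative placeholders, not my actual setup:

```python
# A minimal sketch of role-based model routing. The model names and
# the pick_model() helper are hypothetical stand-ins.
ROLES = {
    "plan": "deepseek-r1",      # strong at planning / reasoning
    "code": "claude-sonnet",    # strong at writing code
    "edit": "small-fast-model", # trims context, prepares the next prompt
}

def pick_model(task: str) -> str:
    """Return the model assigned to a task, defaulting to the coder."""
    return ROLES.get(task, ROLES["code"])
```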

I prompted DeepSeek R1 to come up with a plan for what I wanted, along with a first draft of the code. Whatever it gave me had problems, so I passed it through Claude. Being able to make these one-off programs is *not normal*. I write and delete more code in a day than I used to write in an entire month. I've built my own tools to make this comfortable. In nvim, I always have an LLM available.

DeepSeek down? I just hit another hotkey - and get the next best thing.
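The "next best thing" hotkey amounts to a fallback chain. A minimal sketch, assuming an ordered preference list (the provider names are examples, not my real configuration):

```python
# Sketch of provider fallback: take the first provider that isn't down.
# Provider names are illustrative.
FALLBACKS = ["deepseek", "claude", "openai"]

def next_available(down: set) -> str:
    """Return the highest-priority provider that is currently up."""
    for provider in FALLBACKS:
        if provider not in down:
            return provider
    raise RuntimeError("no provider available")
```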

You should think carefully about how you interface with the AIs. Each one will have its strengths: do you need an LPU super-fast token stream? Or do you want something deliberate and powerful? My interface is naive - but I plan to improve it soon. Right now, it's a long context, and only ever one turn. The previous chat history simply becomes part of the user message. It's basically just a forever markdown file - where everything preceding the cursor is the prompt.
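The "everything preceding the cursor is the prompt" rule is simple enough to sketch as a pure function over buffer lines. The function name and shape are illustrative, not my actual editor code:

```python
# Sketch of the forever-markdown idea: the prompt is simply everything
# in the buffer up to the cursor. Rows are 0-indexed here.
def prompt_from_buffer(lines: list, cursor_row: int, cursor_col: int) -> str:
    """Everything before the cursor becomes the prompt."""
    before = lines[:cursor_row]             # full lines above the cursor
    before.append(lines[cursor_row][:cursor_col])  # partial current line
    return "\n".join(before)
```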

Also, I've set a hotkey to jump to the previous code block - so I can copy it fast. These things matter. The more APM you can get with an LLM, the better. The next improvement I plan is to automatically create a useful summary of the previous context - basically, after each prompt, once an 'accept' occurs, the previous context gets removed and turned into some basic history.
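The jump-to-previous-code-block hotkey boils down to scanning backwards for a pair of fences. A sketch of the lookup as a pure function (the real thing is an editor keybinding; this helper is hypothetical):

```python
# Sketch of "jump to previous code block": find the last complete fenced
# block above the cursor and return its (start, end) line indices.
def previous_code_block(lines: list, cursor_row: int):
    """Return (start, end) indices of the last ``` fence pair above cursor,
    or None if there isn't a complete block."""
    fences = [i for i, line in enumerate(lines[:cursor_row])
              if line.startswith("```")]
    if len(fences) < 2:
        return None
    return fences[-2], fences[-1]
```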

There are a lot of different ways to do context control - but I want ultimate control over what the context contains. The prompts matter. A forever-markdown has been working well, so why change it?

Right now, my next bottleneck is actually applying the diffs and then running the code. It's fast enough with my term/wm hotkeys, but I need to figure out how to program LLM loops that I can orchestrate and observe myself (try until the build passes). We're getting close to the point where LLMs can do simple tasks themselves: run the code, test it, and self-correct! Can I manage two AIs writing two separate features at the same time? Thanks for reading.
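The "try until the build passes" loop I have in mind could look something like this. The `llm` and `apply_diff` callables are hypothetical stand-ins for whatever client and patcher you use; only the loop structure is the point:

```python
import subprocess

# Sketch of a build-retry loop: ask the model, apply its diff, run the
# build, and feed any errors back into the next prompt. llm() and
# apply_diff() are hypothetical placeholders.
def fix_until_green(prompt, llm, apply_diff, build_cmd=["make"], max_tries=5):
    for _ in range(max_tries):
        apply_diff(llm(prompt))
        result = subprocess.run(build_cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return True                      # build passed, stop looping
        prompt += "\nBuild failed:\n" + result.stderr  # self-correct input
    return False                             # gave up after max_tries
```

Observing a loop like this from the outside (rather than letting it run blind) is the part I still want to keep under my own control.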
