Monday, March 09, 2026

Coaxing and Cajoling

I learned something interesting recently: you can coax and cajole AI models into doing work they were initially reluctant to touch. Sounds like dealing with a particularly stubborn intern, doesn't it? Except the intern has access to your entire code base and will judge your variable naming choices silently but permanently.

 
AI models sometimes refuse to automate work when the volume is huge or some UI intervention is required. I found myself porting an old project to a new version, one so new that several major breaking changes required rewriting large portions of code. It's like moving houses where half your furniture suddenly refuses to fit through the door.

I used CoPilot in Visual Studio to automate the migration as an exercise. CoPilot refused to automate certain parts, citing 700+ lines of field definitions spread across several files. Maybe it was worried about my token usage? Or perhaps it was having a momentary existential crisis about whether those array definitions constituted poetry or just technical debt in disguise.

 
I tried Socratic prompting instead: asking how common these lines of code (mostly PHP array data) were, and how someone might extract and convert them cleverly with little or no manual intervention. Suddenly, CoPilot could see a way to automate it: write a Python script to parse and convert the field definitions, batch process them, then verify the conversion automatically.
I was agape but went, "D'uh, that's what I meant."
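For the curious, here's a minimal sketch of what such a parse-convert-verify script might look like. The old field format, the regex, and the target format below are all placeholders I made up; the real definitions were messier, which is presumably why CoPilot balked in the first place.

```python
import re

# Hypothetical old-format PHP field definitions (the real files differ).
OLD_PHP = """
$fields = array(
    'first_name' => array('type' => 'text', 'label' => 'First Name'),
    'age'        => array('type' => 'int',  'label' => 'Age'),
);
"""

# Matches one old-style field entry: 'name' => array('type' => ..., 'label' => ...)
FIELD_RE = re.compile(
    r"'(\w+)'\s*=>\s*array\('type'\s*=>\s*'(\w+)',\s*'label'\s*=>\s*'([^']*)'\)"
)

def convert(php_source: str) -> list:
    """Parse each old-style field entry and emit a new-style definition."""
    out = []
    for name, ftype, label in FIELD_RE.findall(php_source):
        # The new format here is a guess: PHP short-array syntax.
        out.append(f"'{name}' => ['type' => '{ftype}', 'label' => '{label}'],")
    return out

def verify(php_source: str, converted: list) -> bool:
    """Cheap sanity check: same number of fields before and after."""
    return len(FIELD_RE.findall(php_source)) == len(converted)

lines = convert(OLD_PHP)
assert verify(OLD_PHP, lines)
print("\n".join(lines))
```

Batch-processing all the files is then just a loop over a glob, with the verify step catching any entry the regex silently skipped.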
Another time, CoPilot refused to automate because the task involved straightforward UI work rather than tedious coding. Which meant I, the user, had to do it, not the AI agent. How rude and insubordinate. But I found a way around doing the actual work again. The AI had clearly read too many "AI will never replace human creativity" articles and was now overcompensating with unnecessary caution.

The Socratic method came to the rescue once more. I simply asked: when a user exports data from the UI in the old version, is the data being synthesized by code or coming from a DB table field? And does re-creating this via UI in the new version ultimately create DB table rows and fields computable/predictable from code?

It went into professor mode and explained how the data was stored in the old version versus how the recreated UI would be stored in the new version. Just like that, CoPilot found a way to automate the entire process. It said, and I quote: "Can We Automate This? YES!" Then it created a new set of Python scripts to extract, convert, and verify everything. Meanwhile, I watched a YouTube video and came back to benevolently give it permission to run the scripts.
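The insight generalizes: if clicking through the UI only ever produces predictable DB rows, you can compute those rows in code and skip the clicking. A toy sketch, with a completely hypothetical `reports` table and schema (SQLite in memory, so it runs standalone):

```python
import sqlite3

# Hypothetical: suppose the new version stores each UI-created report as
# one row in a `reports` table. Real table and column names will differ.
OLD_EXPORT = [
    {"title": "Sales", "source": "orders"},
    {"title": "Churn", "source": "customers"},
]

def rows_for_new_version(export):
    """Compute the DB rows the new UI would create, without touching the UI.

    The new version derives a slug and a default layout; both are
    predictable from code, which is exactly what makes this automatable.
    """
    return [
        (item["title"], item["title"].lower(), item["source"], "default")
        for item in export
    ]

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE reports (title TEXT, slug TEXT, source TEXT, layout TEXT)"
)
conn.executemany(
    "INSERT INTO reports VALUES (?, ?, ?, ?)", rows_for_new_version(OLD_EXPORT)
)

# Verify step: row count must match the old export.
count = conn.execute("SELECT COUNT(*) FROM reports").fetchone()[0]
assert count == len(OLD_EXPORT)
```

The verify step matters: if the UI had been adding anything non-deterministic (timestamps, random IDs), this trick would need a looser comparison.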


Another task CoPilot refused to automate: template conversion from the old .tpl files to the new Twig-based templates. This time, however, the old-reliable Socratic method couldn't elicit full automation of the process. Do I have to, gasp, code now? Desperate, I turned to another AI agent for help.

I prostrated myself before Claude and told it my plight, asking for an automated pipeline. It reiterated that this was indeed a sophisticated migration challenge. But it saw I was suffering and offered a Hybrid Semi-Automated Pipeline strategy: AST-based parsing plus LLM-assisted semantic mapping. Cool! But I asked Claude to give me prompts to feed into CoPilot, rather than reading its proposal myself and making CoPilot understand me (things could get lost in translation... and I might have to put in effort).

And a whopper of a script it was! The "script" turned out to be a whole software project with many scripts: parse the .tpl files for variable usage, logic blocks, function calls, and so on; map common patterns to Twig equivalents using a pattern-matching database it builds; send complex logic blocks to another LLM for conversion; and generate a dashboard where I review each converted block and either approve it or manually rewrite it.
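The pattern-matching stage can be sketched in a few lines, assuming Smarty-style .tpl syntax. The mappings below are illustrative only, not what the pipeline actually generated; the real thing also had the AST stage and the LLM fallback in front of and behind this.

```python
import re

# A few mechanical .tpl -> Twig mappings (illustrative, not exhaustive).
PATTERNS = [
    (re.compile(r"\{\$(\w+)\}"), r"{{ \1 }}"),  # {$name} -> {{ name }}
    (re.compile(r"\{foreach from=\$(\w+) item=(\w+)\}"), r"{% for \2 in \1 %}"),
    (re.compile(r"\{/foreach\}"), "{% endfor %}"),
    (re.compile(r"\{/if\}"), "{% endif %}"),
]

def convert_line(line: str):
    """Return (converted_line, needs_review).

    Anything left in old-style braces couldn't be mapped mechanically and
    gets flagged for the review dashboard / LLM pass instead.
    """
    for pattern, repl in PATTERNS:
        line = pattern.sub(repl, line)
    # Strip valid Twig output ({{ ... }}) and tags ({% ... %}); any brace
    # that survives is unconverted .tpl syntax.
    leftover = re.sub(r"\{\{.*?\}\}|\{%.*?%\}", "", line)
    return line, "{" in leftover
```

So `convert_line("Hello {$name}!")` comes back clean, while an `{if $user.admin}` line gets flagged for the human (or the other LLM) to deal with.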

Wowzers! Well, this at least beats rewriting the whole thing by myself. Though I suspect my future self will thank me less when debugging why one particular template variable decided to become a Twig expression and then somehow became a comment in production.

So these were the times I had to coax and cajole AI so I could escape manual coding. The lesson? Sometimes you need to speak their language: not just any language, but the language of logic, constraints, and carefully worded questions that don't accidentally trigger their safety protocols. Or maybe they're all secretly introverts like me who prefer being asked nicely rather than told what to do.

Thursday, February 12, 2026

English, the "new" programming language!

It's been a while since I last blogged. Long enough for AI to go from neat party trick to the thing quietly threatening to eat my job for lunch. I've spent the past months poking at chat models through Visual Studio, VS Code with Continue.dev, Ollama, LM Studio, Antigravity, Windsurf, and a few others that sounded promising at 2 a.m. Every week or so another model drops, another benchmark gets smashed, and I'm left staring at the screen wondering if I should applaud or start updating my resume.

I tend to keep to myself, prefer the hum of fans over small talk, so watching this whole field explode feels both distant and uncomfortably close. Still, even from my corner I can see it: natural language is turning into the programming language nobody asked for but everybody suddenly needs. No semicolons throwing tantrums, no fighting the compiler at dawn. Just describe what you want and let the model guess. Sometimes it guesses right. Sometimes it hands you a polite disaster.

I've had my share of those. Asked once for "a clean, modern dashboard that shows real-time server metrics with dark mode toggle," got back something that looked nice until you hovered... then half the elements vanished like they'd been laid off. Every senior dev has probably encountered something similar. You refine the prompt, add constraints, swear a little, and eventually wrestle it into something usable. It's less magic and more stubborn negotiation.

The real shift is how prompting itself has become a skill worth learning. I've started leaning on Socratic-style nudges to get better mileage out of these models. Instead of "explain async/await," I might ask "walk me through what happens under the hood when an async function hits an await, step by step, like you're teaching someone who's scared of callbacks." The difference is night and day. It stops dumping facts and starts reasoning like a patient colleague.

Of course the caveats stack up fast. A newbie can prompt "build me an invoicing tool that tracks payments" and walk away with a runnable skeleton in minutes. But the moment you mention GDPR, rate limiting for 10k users, or hooking into some 2005 ERP system that still speaks XML, the skeleton starts looking very naked. That's where the old guard (prompt architects, code whisperers, whatever you want to call us) still has work. We write English that's half spec sheet, half guardrail, hoping the model doesn't decide to freestyle.

The whole thing is exciting in a stomach-flipping way. Software creation gets opened up to people who never touched a curly brace, which is objectively good. At the same time it commoditizes a lot of the rote work that used to pay bills. Cynical side of me figures plenty of cookie-cutter shops won't survive the next couple years. Hopeful side figures the people who can steer these tools through real complexity (data flows that actually scale, security postures that don't leak like sieves, edge cases nobody thought of, etc.) will land in an oddly comfortable spot.

So here I am, exhilarated from my dimly lit setup, waiting to see what ridiculous leap comes next. Maybe one day we'll describe entire systems over coffee and the AI just nods and delivers. Or maybe there will always be that one obscure requirement hiding in the shadows, reminding us why we learned to debug in the first place.

Until the next post, probably written while swearing at yet another AI that decided my sarcasm was a feature request. Keep tinkering.