I learned something interesting recently: you can coax and cajole AI models into doing work they were initially reluctant to touch. Sounds like dealing with a particularly stubborn intern, doesn't it? Except the intern has access to your entire code base and will judge your variable naming choices silently but permanently.
AI models sometimes refuse to automate a task when the volume of work is huge or some UI intervention is required. I ran into this while porting an old project to a new version of its platform, one so new that several major breaking changes required rewriting large portions of the code. It's like moving houses where half your furniture suddenly refuses to fit through the door anymore.
I used CoPilot in Visual Studio to automate the migration as an exercise. CoPilot refused to automate certain parts, stating there were 700+ lines of field definitions across several files. Was it worried about my token usage, maybe? Or perhaps it was having a momentary existential crisis about whether these array definitions constituted poetry or just technical debt in disguise.
I tried Socratic prompting instead: asking how common these lines of code (mostly PHP array data) were, and how someone would extract and convert this cleverly with little or no manual intervention. Suddenly, CoPilot could see a way to automate this by writing a Python script to parse and convert the field definitions, batch process them, then verify the conversion automatically.
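To give a flavor of what that looked like, here's a minimal sketch of such a parse-convert-verify script. The `$fields[...] = array(...)` shape, the field names, and the bracket-style output format are hypothetical stand-ins for my project's actual definitions:

```python
import re

# Hypothetical sample input; the real project had 700+ lines of these
# across several files, read from disk rather than a string literal.
OLD_PHP = """
$fields['title'] = array('type' => 'varchar', 'length' => '255');
$fields['body']  = array('type' => 'text', 'length' => '');
"""

FIELD_RE = re.compile(r"\$fields\['(\w+)'\]\s*=\s*array\((.*?)\);", re.S)
PAIR_RE = re.compile(r"'(\w+)'\s*=>\s*'([^']*)'")

def parse_fields(src: str) -> dict:
    """Extract each old-style field definition into a plain dict."""
    fields = {}
    for name, body in FIELD_RE.findall(src):
        fields[name] = dict(PAIR_RE.findall(body))
    return fields

def convert_field(name: str, attrs: dict) -> str:
    """Emit an invented new-style definition (short PHP array syntax)."""
    parts = [f"'type' => '{attrs['type']}'"]
    if attrs.get("length"):
        parts.append(f"'length' => {attrs['length']}")
    return f"'{name}' => [{', '.join(parts)}]"

fields = parse_fields(OLD_PHP)
for name, attrs in fields.items():
    print(convert_field(name, attrs))

# The "verify" step: no field definition silently dropped.
assert len(fields) == OLD_PHP.count("$fields[")
```

The verification step at the end is the part that made me trust the batch run: parse, convert, then cross-check counts before touching the real files.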
I was agape but went "D'uh, that's what I meant."
Another time, CoPilot refused to automate a task because it involved straightforward UI work rather than tedious coding. That meant I, the user, had to do it, not the AI agent. How rude and insubordinate. But I found a way around doing the actual work again. The AI had clearly read too many "AI will never replace human creativity" articles and was now overcompensating with unnecessary caution.
The Socratic method came to the rescue once more. I simply asked: when a user exports data from the UI in the old version, is the data being synthesized by code or coming from a DB table field? And does re-creating this via UI in the new version ultimately create DB table rows and fields computable/predictable from code?
It went into professor-mode and explained how the data was stored in the old version versus how the recreated UI would be stored in the new version. Just like that, CoPilot found a way to automate the entire process. It said, and I quote: "Can We Automate This? YES!" Then it created a new set of Python scripts to extract, convert, and verify everything. Meanwhile, I watched a YouTube video and came back to benevolently give it permission to run the scripts. Secretly I was agape but went "D'uh, that's what I meant."
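The insight generalizes: once you know the UI only ever writes predictable rows, a script can write those rows directly. Here's a toy sketch using sqlite3, where the `export_profiles` table, its columns, and the data are invented for illustration and bear no relation to the product's real schema:

```python
import sqlite3

# Hypothetical schema standing in for whatever the new version's UI
# actually writes when you click through its export screens.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE export_profiles (name TEXT, fields TEXT, fmt TEXT)")

# What the old version's code-synthesized export told us we need.
old_exports = [
    {"name": "customers", "fields": ["id", "email"], "fmt": "csv"},
    {"name": "orders", "fields": ["id", "total"], "fmt": "csv"},
]

# Re-create each profile as the row the UI would have written for us.
for ex in old_exports:
    conn.execute(
        "INSERT INTO export_profiles VALUES (?, ?, ?)",
        (ex["name"], ",".join(ex["fields"]), ex["fmt"]),
    )

rows = conn.execute("SELECT name, fields FROM export_profiles").fetchall()
print(rows)
```

No clicking required; the script computes the rows the UI would have produced and inserts them in one pass.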
Another task CoPilot refused to automate: template conversion from old .tpl files to new Twig-based templates. This time, however, the old-reliable Socratic method couldn't elicit full automation of the process. Do I have to (gasp!) code now? Desperate, I turned to another AI agent for help.
I prostrated myself before Claude and told it my plight, asking for an automated pipeline. It reiterated that this was indeed a sophisticated migration challenge. But it saw I was suffering and gave me a Hybrid Semi-Automated Pipeline strategy using AST-based parsing plus LLM-assisted semantic mapping. Cool! But rather than reading the proposal myself and translating it for CoPilot (things could get lost in translation... and I might have to put in effort), I asked Claude to give me prompts I could feed straight into CoPilot.
And a whopper of a script it was! The "script" actually turned out to be a whole software project: many scripts that parse .tpl files for variable usage, logic blocks, function calls, and so on, then map common patterns to Twig equivalents using a pattern-matching database they build along the way. Complex logic blocks are sent to another LLM for conversion. Finally, it generates a dashboard where I can review each converted block and either approve it or rewrite it by hand.
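In spirit, the pattern-matching half of that pipeline looks something like the sketch below. The Smarty-style .tpl patterns and the "needs review" heuristic are my simplifications, not the generated project's actual rules:

```python
import re

# A toy pattern "database": (old .tpl pattern, Twig replacement).
# The real pipeline built its mapping table from the project's templates.
PATTERNS = [
    (re.compile(r"\{\$(\w+)\}"), r"{{ \1 }}"),
    (re.compile(r"\{if \$(\w+)\}"), r"{% if \1 %}"),
    (re.compile(r"\{/if\}"), r"{% endif %}"),
]

def convert_line(line: str) -> tuple[str, bool]:
    """Apply known mappings; flag leftover constructs for LLM/manual review."""
    for pat, repl in PATTERNS:
        line = pat.sub(repl, line)
    # Strip valid Twig output/tag delimiters; any brace left over means
    # a construct we couldn't map automatically.
    residual = re.sub(r"\{\{.*?\}\}|\{%.*?%\}", "", line)
    return line, "{" in residual

tpl = ["{if $logged_in}", "Hello {$name}", "{/if}", "{assign var=x value=1}"]
for line in tpl:
    out, review = convert_line(line)
    print(("REVIEW " if review else "OK     ") + out)
```

Lines the pattern database can't fully convert (like the `{assign ...}` here) get flagged, which is exactly what feeds the LLM-conversion and manual-review stages.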
Wowzers! Well, this at least beats rewriting the whole thing by myself. Though I suspect my future self will thank me less when debugging why one particular template variable decided to become a Twig expression and then somehow became a comment in production.
So those were the times I had to coax and cajole AI so I could escape manual coding. The lesson? Sometimes you need to speak their language, and not just any language: their language of logic, constraints, and carefully worded questions that don't accidentally trigger their safety protocols. Or maybe it's just that they're all secretly introverts like me who prefer being asked nicely rather than told what to do.