There tend to be three AI camps: 1) AI is the greatest thing since sliced bread and will transform the world. 2) AI is the spawn of the Devil and will destroy civilization as we know it. And 3) “Write an A-Level paper on the themes in Shakespeare’s Romeo and Juliet.”
I propose a fourth: AI is now as good as it’s going to get, and that’s neither as good nor as bad as its fans and haters think. And no, you’re still not going to get an A on that paper.
You see, now that people have been using AI for everything and anything, they’re beginning to realize that its results, while fast and sometimes useful, tend to be mediocre.
My take is that LLMs can speed up some work, like paraphrasing, but all the time saved gets diverted into verifying the output.
My take: I’ve only tried a handful of times, but I’ve never been able to get an LLM to do a grunt refactoring task that didn’t require me to rewrite all the output anyway.
And you can do a lot of that with good IDEs.
The trick is giving it tons of context. It also depends on the LLM. Claude has given me the most success.
You have to invest in setting it up for success: give it really good context, feed it docs or other resources through MCP servers (a minimal example is sketched after this comment), and use a memory bank pattern.
I just did a 30k contract with it where I hand-wrote probably 20% of the code, and probably 75% of the rest of my effort was just reviewing the diffs the LLM made, like a PR. But that doesn’t mean I’m vibe coding: I feed it atomic operations and review each change as if it were a PR. I come away understanding the totality of the code, so I can debug easily when things go wrong.
You can’t just go, “Here’s my idea; make it.” That will probably never happen (even though that’s the Kool-Aid being served), but if you’re disciplined and make the most of the tools available, it can absolutely 3-5x your output as an engineer.
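To make the MCP suggestion concrete, here is a minimal sketch of a docs-reading MCP server built with the official TypeScript SDK (@modelcontextprotocol/sdk). The server name, tool name, and docsDir path are placeholders of mine, not anything the commenter described, and the code assumes the SDK’s zod-based tool registration:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { readFile } from "node:fs/promises";
import path from "node:path";
import { z } from "zod";

// Hypothetical layout: project docs live in ./docs next to this script.
const docsDir = "./docs";

const server = new McpServer({ name: "project-docs", version: "0.1.0" });

// One tool the model can call to pull a doc into its context on demand.
server.tool(
  "read_doc",
  { file: z.string().describe("doc filename, e.g. architecture.md") },
  async ({ file }) => ({
    content: [
      {
        type: "text" as const,
        // basename() keeps the tool from reading outside docsDir.
        text: await readFile(path.join(docsDir, path.basename(file)), "utf-8"),
      },
    ],
  })
);

// stdio transport: the client (Claude Code, etc.) spawns this as a subprocess.
await server.connect(new StdioServerTransport());
```

Register a script like that in the client’s MCP config and the model can fetch docs itself instead of you pasting them into the prompt.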
The LLM in the most recent case had a monumental amount of context. I then gave it a file implementing a breed of hash set, asked it to explain several of the functions (which it did correctly), and then asked it to convert it to a hash map implementation (an entirely trivial, grunt change, but one too pervasive and functionality-directed for an IDE to have a neat refactoring for it).
It spat out the source code of the tree-based map implementation in the standard library.
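For anyone wondering why a set-to-map conversion is trivial but pervasive, here is a toy sketch of my own (the commenter’s actual file isn’t shown): the hash function survives untouched, but every slot type, signature, and lookup changes, which is exactly the kind of diffuse edit an IDE has no one-click refactoring for.

```typescript
// Shared hashing helper; this part wouldn't change in the conversion.
function bucketIndex(key: unknown, size: number): number {
  let h = 0;
  for (const c of String(key)) h = (h * 31 + c.charCodeAt(0)) | 0;
  return Math.abs(h) % size;
}

// Before: a chained hash set. Buckets hold bare keys.
class ToyHashSet<K> {
  private buckets: K[][] = Array.from({ length: 16 }, () => []);

  add(key: K): void {
    const bucket = this.buckets[bucketIndex(key, this.buckets.length)];
    if (!bucket.includes(key)) bucket.push(key);
  }

  has(key: K): boolean {
    return this.buckets[bucketIndex(key, this.buckets.length)].includes(key);
  }
}

// After: the same skeleton as a map. Every bucket entry grows a value,
// add() becomes set(key, value), and lookups now return V | undefined.
class ToyHashMap<K, V> {
  private buckets: Array<Array<[K, V]>> = Array.from({ length: 16 }, () => []);

  set(key: K, value: V): void {
    const bucket = this.buckets[bucketIndex(key, this.buckets.length)];
    const entry = bucket.find(([k]) => k === key);
    if (entry) entry[1] = value;
    else bucket.push([key, value]);
  }

  get(key: K): V | undefined {
    const bucket = this.buckets[bucketIndex(key, this.buckets.length)];
    return bucket.find(([k]) => k === key)?.[1];
  }
}
```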
LLMs are great at checking grammar in writing. That’s the other thing I’ve found they’re useful for 🤷
Basically, using LLMs to write something is always a bad idea (unless you’re responding to bullshit with more bullshit, e.g., work emails 🤣). Using them to check writing is pretty useful, though.