The End of Decades of Coding
Since Windows 98 launched, I have been in love with coding. All my free time was spent building software. Now, in 2025, agents read and write code for me, and it's not a bad thing.

As with most hot topics, nuanced takes like mine are rarely the ones that get shared, but I suspect most people sit somewhere in the middle of the AI journey right now. Here’s this post at a glance:
- Brief memoir of my coding history
- Initial reactions to LLMs becoming available
- Sharing advice that helped me embrace AI
- Why directing agents is in everyone’s future
My Coding Story Told Quickly
My first program was a QBASIC script on Windows 98. You would run the script, then watch as it printed out about a dozen lines of text and exited. That text was ASCII art of a heart, what you might call a Valentine’s Day e-card.
Here’s a pretty realistic recreation, reconstructed from memory:
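```qbasic
' Valentine's Day e-card, rebuilt from memory (the original is long gone)
CLS
PRINT "   .:::::.   .:::::.   "
PRINT "  :::::::::.:::::::::  "
PRINT "  :::::::::::::::::::  "
PRINT "   ':::::::::::::::'   "
PRINT "     ':::::::::::'     "
PRINT "       ':::::::'       "
PRINT "         ':::'         "
PRINT "           '           "
PRINT " HAPPY VALENTINE'S DAY "
END
```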
Some years later, I was deep into a tool called Game Maker, moving from drag-and-drop to its built-in scripting language. This set such a strong foundation, essentially shaping my brain to think in terms of imperative steps.
From there it was websites, game servers, protocols, databases, and eventually production software generating real value. “Do what you love, and you’ll never work a day in your life.”
The Culture Shock
Building software is so uniquely empowering. The recent era of “Learn to Code” was something I really believed in. Even outside of the career, coding is a creative outlet, a problem solver, and a testbed for applying new knowledge.
Then came generative AI, like ChatGPT, and everything changed.
My first reaction to most bold claims is skepticism, and AI was instantly touted as a total game changer, able to completely replace most jobs. People had all sorts of reactions, from complete moral rejection to an instant outsourcing of all thinking, writing, and doing. It only got louder with image generation, then voice, with companies gradually claiming to have “solved” entire arts and disciplines.
Honestly, it was sometimes demeaning, right?
The idea that AI could replace art, the claims that humans could be replaced so easily, just had zero supporting evidence. Even now as an AI enthusiast I have complete conviction that humans will remain a necessary element in anything worth building.
My Timeline of Embracing AI
Those who know me know I am ruthlessly skeptical. “Too good to be true” is a guiding principle. Here are the raw, honest stages I went through:
It’s Just Pattern Matching
The first time I encountered a great fit for LLMs was extracting data or facts from text. The idea of “structured data” is very familiar to engineers, and, unknowingly, to everyone else. We finally had a way to make all text structured.
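To make that concrete, here’s a minimal sketch of the pattern using the OpenAI Python client; the model name, prompt, and fields are illustrative assumptions, not the exact setup I used:

```python
import json

from openai import OpenAI  # official OpenAI Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def extract_contact(text: str) -> dict:
    """Pull a fixed schema out of free-form text."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any capable model works
        response_format={"type": "json_object"},  # ask for JSON back
        messages=[
            {"role": "system", "content": (
                "Extract fields from the user's text. Reply with JSON only: "
                '{"name": string or null, "email": string or null}'
            )},
            {"role": "user", "content": text},
        ],
    )
    return json.loads(resp.choices[0].message.content)


print(extract_contact("Reach out to Jane Doe (jane@example.com) about the contract."))
```

The point is the shape: free-form human text goes in, a machine-readable structure comes out.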
However, as far as generative text went, it was still just a toy. Summarizing articles or tech specs I was writing was as far as I could trust it, and we already had some pretty good statistics-based summary tools.
Fancy Demos Aren’t Real
Seeing a whole program unfurl into existence from an LLM was honestly a party trick, and one I had seen before. I still think people who don’t know software deeply were being somewhat fooled.
All developers know the saying “it’s just adding a button, how hard could it be” is a warning that real software fundamentally has layers of depth and nuance.
Does anyone remember WYSIWYG? Visual Basic? No-Code? The whole promise was that you could quickly put together software to meet your needs from “libraries” of functionality, like puzzle pieces. The Unity game engine is similar: you can combine a few assets to make something that looks visually stunning but lacks any depth or substance.
These tools can absolutely empower people to produce quality work, but they can also produce a convincing yet hollow shell.
While these tools do eventually settle down and find their place in the real world, the initial impression and promise sit far above the more reasonable eventual value.
We Finish Each Other’s Sentences
Copilot, for me, was the first time I found real personal value.
Like many, I knew the code that LLMs produced was dangerous if not carefully reviewed, and research keeps repeating this point: Veracode’s studies claim nearly half of AI-generated code has security flaws, and IEEE reports cite similar statistics. The blast radius of AI grows with how much you ask it to do at once.
However, Copilot let me stay in command. As an experienced coder, I am often limited by my 130 wpm typing speed. I often know the next 4 or 5 lines to write. Classic autocomplete helped me keep up, but Copilot was even better. I could define a function, and it would produce the full implementation. One quick glance to confirm it’s what I was going to type, and done.
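An invented but representative example (not a transcript of a real suggestion): I would type the signature and a comment, and the proposed body would match what I was about to write anyway.

```python
def chunk(items: list, size: int) -> list[list]:
    # I typed the signature and this comment; the line below is the kind of
    # body Copilot would suggest, exactly what I'd have typed myself.
    return [items[i:i + size] for i in range(0, len(items), size)]
```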
Coding much faster does not mean building much faster.
The catch here is that coding itself was rarely the bottleneck in software development. Writing code 10x faster does not translate to anywhere near 10x faster software development. The studies reflect this: METR’s study showed a 19% overall slowdown, and HashiCorp found only a single-digit percentage improvement.
This Feels So Random
Sometime in 2023, I finally started to see some small success at larger-scale work: having AI add tests, review code, and so on. It was still very much full of false positives and subtle issues; the same request might produce brilliance or garbage, and it took time to figure out which you got.
Using AI really started to feel like gambling.
I would still always choose to assign tickets to the developers under my guidance at the time, because at least I knew the work would be done with reasonable intentions. However, I fully encouraged those developers to use AI when suitable, especially for quick prototyping or design iterations before starting the real work.
Many developers, myself included, hit a wall here. Social media posts claimed insane productivity, whole apps shipped in days, while the rest of us would spend a day on a single button. To this day it remains impossible to produce high-quality, complex software without deep expert human involvement, so I can only conclude this was all hype marketing.
My Personal Assistant
This is where many people are today, and where I was about 2 years ago.
It’s fair to say that today, every AI essentially guesses each next word, picking whatever is most likely given everything it has been trained on. What they call reasoning is essentially guessing a lot before picking a guess. This is why it always appears so impressive so quickly: guessing the most likely thing will very often appease the person asking.
However, when people use human intuition and knowledge, and learn the shortfalls of things being “approximately correct”, they start to get more comfortable using AI output. At this time I was encouraging and teaching software developers under my guidance to adopt these tools.
The average person learned to look at the hands in images… naturally uncovering the weaknesses of AI.
When you know these weaknesses, you can work around them. You start to use it for research, but you tell it to report facts only. You generate a dozen logos and use your own design sense to explore the ones that have potential. You use AI as a second opinion to your own work, letting it test or review your writing.
It’s Agentic!
This is the real next step. This is the future.
I firmly believe in 10 years, all computer based work and some field work will be done by a human directing several AI agents. Currently, my agent is Claude Code CLI open in a handful of tabs, as I’m most familiar typing into a terminal. You may prefer Replit, or ChatGPT Agentic, or something not yet released!
“Agent”: When an AI is able to navigate a problem by itself and use tools appropriately to complete a task without help.
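As a conceptual sketch, the core of any agent is a loop: the model picks an action, the harness executes it, and the result feeds back in. The tool names and the model call below are invented stand-ins, not how any particular product is implemented.

```python
# A toy agent loop. Tools and the model call are illustrative stand-ins.
TOOLS = {
    "read_file": lambda path: open(path).read(),
    "write_file": lambda path, content: open(path, "w").write(content),
}


def call_model(task: str, history: list) -> dict:
    """Stand-in for an LLM call. Given the task and the actions so far, a
    real model returns either a tool request such as
    {"tool": "read_file", "args": {"path": "main.py"}} or {"done": "answer"}."""
    raise NotImplementedError  # sketch only


def run_agent(task: str, max_steps: int = 20) -> str:
    history = []
    for _ in range(max_steps):
        decision = call_model(task, history)  # the model picks the next action
        if "done" in decision:                # it decided the task is complete
            return decision["done"]
        result = TOOLS[decision["tool"]](**decision["args"])  # run the tool
        history.append((decision, result))    # feed the outcome back in
    return "stopped: exceeded max_steps"
```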
The rise of AI agents means every person can direct a team.
My framing of this cultural shift: as people move up through a career, they eventually start to direct others. One insightful decision can multiply when you have 10 employees who adopt it. The combined output of an experienced leader and an inexperienced team can be many times higher than the leader working alone.
The true value of agents is giving everybody a team from day one. Sure, early in their career they may not understand how to direct that team; the same is true with human teams today. However, the brilliance is that they can start learning to direct a team far earlier, and it just so happens that the team is all AI to start.
The Ultimate Insight
So, with that journey we come to the current state and what I consider to be the most accurate insight. It is in fact why I started this company.
The next generation of emerging leaders will have years of leadership experience already.
Agents share many traits with people: they are sometimes wrong; they will try their best; they can’t read your mind. You need to use the same principles and practices that make an effective leader. You need to maximize autonomous time, and learn how to trust their work, because it’s unreasonable to expect perfection.
After so much time coding, I found myself delegating that work more and more often. The title of this post implicates AI, but it has always been the case that coders eventually do less coding, and some end up doing none at all.
There are many new opportunities right now:
- Gain leadership experience in your free time.
- Build and direct a team of any size instantly.
- Do more than one person ever could before.
- Access expert research and advice, on demand, on any topic.
For me, the opportunity I’m focusing on above all else is:
- Measuring and mentoring agent leadership skills.
Disagree? Let’s talk. I’m building assessments around this and want to hear counterarguments.