AI Replacing Us

2025-08-24

I don't think there's a single software engineer out there who hasn't thought about AI replacing their job (although the assembly devs probably aren't too worried). For the rest of us -- those who work with Python, JS, or really any language with an abundance of training data out there -- the fear of AI outright replacing us in our work is much more real.

Ramp

Before Ramp, I relied heavily on AI to help with coding tasks. Spamming tab was a frequent move in Cursor, and, on occasion, agents would help me take care of very mundane things, though with varying success. I knew AI programming was up and coming but hadn't really seen or experienced the promised fruits of Altman's labor (the "10x productivity gains"...) -- projects still took me all those long hours to complete, and no magical new app had ever just popped up in front of me as promised.

Ramp is a very successful startup. They make lots of money, have lots of customers, move at light speed, and give their engineers everything they need to move at light speed. That includes AI tools: we were given near-unlimited access to the latest and greatest -- Cursor, Claude Code, Gemini CLI, Codex, Devin, and more. We were also continually encouraged to use AI throughout our work, which undoubtedly improved shipping speed.

When I started my internship with Ramp in May 2025, this was all a shock to me. I had never even really heard of Claude Code, let alone had unlimited access to tools like these. I was, quite honestly, blown away and worried sick at the same time. I can still recall conversations with team members about how the future might look that really instilled some fear in me (one, notably, where humans would wear AI-enabled augmentation gear and be exploited for their dexterity and precision motor control while AI did the decision making).

What really did it for me was when I stumbled across the infamous AI 2027 paper around 1 AM on the Monday morning of my third week. I don't think I had ever been more afraid, and I distinctly recall it being the first time something had kept me up at night. I slept terribly and went to work fearful and resentful of the new AI tools that, once merely mighty and somewhat worrying, now felt like a paved road to doom that I would walk myself down.

Reality

For the last six months, OpenAI and Anthropic have told us that we are dealing with PhD-level models that can act as your "Pocket PhD". We even saw OpenAI and DeepMind models score Gold on the IMO, drafting great proofs out of thin air after reasoning for hours.

At the end of the day, I still came into work in the morning. I worked, I had lunch, I deployed the best of Claude Code and Cursor, and, guess what? I still left at 7 pm with standard forward progress (and some days backwards progress), came back the next day, and coded every day in the same way.

I was, in fact, working with these so-called PhD models (Opus 4.1 all the time, plus GPT-5, Grok 4, you name it). It took me some time to realize that something about my AI-replacement fears didn't add up. If we really had PhD-level AI models, then why the hell aren't we all out of work? If Opus 4.1 and GPT-5 are really this smart, why can't they understand my codebase better than me, write better code than me, and just outright replace me? Why am I even still employed?

My short answer as of Aug 2025 (subject to change!!!): intelligence is not one-dimensional.

Singularity and AGI

I think AGI is one of the stupidest concepts in AI. However, it's an important one because people see it as the key moment when AI becomes eligible for mass workplace automation. Following a popular definition of AGI -- AI that is more intelligent than the median human -- the idea is that AI would rapidly start to decimate the workforce and automate knowledge work.

This moment is commonly called the singularity, and it's often depicted in news article hero images as a simple 2D graph of intelligence over time: a slowly rising linear curve for humans and a rapidly rising exponential curve for AI.

This mass job displacement has not happened as of the time of writing (thankfully) -- even though, by some measures, AI is already much more intelligent than the average human (with GPT-5 scoring 148+ on IQ tests). In my opinion, this is what happens when you project two very complex and very different kinds of intelligence onto a one-dimensional space, like an IQ test or a benchmark with a single numeric score.
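To make the projection point concrete, here's a toy sketch in Python. The skill dimensions and every number are invented for illustration; the only point is that two very different profiles can collapse to the same scalar:

```python
# Toy illustration: two very different skill profiles can project onto
# the exact same one-dimensional score. Every number here is made up.

human = {"exam_questions": 0.6, "codebase_context": 0.9, "long_term_planning": 0.9}
model = {"exam_questions": 1.0, "codebase_context": 0.7, "long_term_planning": 0.7}

def scalar_score(profile: dict) -> float:
    # A 1-D projection, IQ-test style: averaging collapses the profile's shape.
    return sum(profile.values()) / len(profile)

print(round(scalar_score(human), 2))  # 0.8
print(round(scalar_score(model), 2))  # 0.8 -- "equally intelligent"

# But a job draws on specific dimensions, not the average:
for task in human:
    print(f"{task}: human={human[task] >= 0.8}, model={model[task] >= 0.8}")
```

Both profiles score identically under the 1-D projection, yet they pass and fail completely different tasks -- which is exactly the information a single benchmark number throws away.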

How AI Could Replace Us

Deep learning knows no limit on intelligence gains. In fact, those who bet against deep learning are continually proven wrong by the ever-advancing capabilities of AI models. Despite this, more advanced models will not be the direct cause of AI replacing software engineers. Things get messy when demand for software output dips below its supply.

It's important to note that I don't think we will hit a magic line where AI suddenly replaces human programmers. As AI models improve, our work will become more augmented and, if AI is as useful as it promises to become, each human programmer's net output per unit time will increase.

If demand for software products stays above net human output even as models advance and augmentation deepens, things actually stay just fine: humans remain employed. If demand dips below that output, things get nasty. This is where mass layoffs begin, because fewer human engineers are needed (each one being far more powerful than a few years ago).

The part that scares me is the situation where software supply totally outruns demand -- because AI-augmented human programmers have become vastly more powerful -- even if demand stays the same or increases.

Regardless, my formula for AI replacement is simple: "If supply > demand because AI increases net software product supply, AI has started to replace human programmers."
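As a back-of-the-envelope sketch of that condition, here's a toy Python model. The headcount, per-engineer output, demand figure, and augmentation multipliers are all invented for illustration:

```python
# Toy model of the "supply > demand" replacement condition. All numbers
# here are hypothetical -- the shape of the comparison is the point.

def net_supply(engineers: int, base_output: float, ai_multiplier: float) -> float:
    # Total software output: headcount times per-engineer output, scaled
    # by how much AI augmentation multiplies each engineer's productivity.
    return engineers * base_output * ai_multiplier

demand = 1200.0     # units of software output the market wants (made up)
engineers = 100     # hypothetical headcount
base_output = 10.0  # hypothetical per-engineer output without AI

for ai_multiplier in (1.0, 1.2, 2.0):
    supply = net_supply(engineers, base_output, ai_multiplier)
    print(f"{ai_multiplier}x augmentation: supply={supply:.0f}, "
          f"replacement pressure: {supply > demand}")
```

In this made-up scenario, a 1.2x multiplier (roughly the 20% gain I estimate below) just meets demand; it's only well past that, around 2x and beyond, that supply overshoots and the replacement condition starts to bite.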

My Takeaways and Policy

I think we have many years before the above happens. Even with the latest coding tools, I and several hundred other engineers worked hundreds of hours this summer and, while we definitely shipped more, we still have a LOT more to ship. I'd estimate our net software output improved by 20% or so, not much more than that.

My policy has more or less updated to the following:

  • Be elite in technical ability. As projects become more complex, so do problems that need to be solved. A great engineer using AI will beat out an OK engineer with AI any day.
  • Write more. Writing is one of the best ways to improve your cognitive function and learning ability.
  • Become an expert in AI coding tools. You can't not embrace them anymore. Learn them.
  • Network, network, network. Humans are in control (for now), and humans choose friends over strangers.

And my takeaways on AI replacing us are this:

  • It probably won't happen in the next 1-2 years. The PhD-level models have not made much of an economic dent in the job market.
  • AI has to get much more practical to replace engineers. There is so much more to being a software engineer than writing code.
  • Everyone who fearmongers you about AI probably has some financial stake in doing so (but not always). Use your best judgement when someone tries to scare you about AI.

Archangel Michael