I was sure they were going to fire people.

As CTO of a cognitive-assistants company, I worked with a client training NLP models to automate tasks their experts did by hand. A process that used to take 15 minutes, our system cut to 2 or 3.

The experts knew it. Some didn’t want to share their knowledge during data labeling. They were afraid. So was I.

Optimizing a process almost always means reducing people. That’s what I thought.


What happened

There were no layoffs.

The client expanded their payroll.

The system didn’t replace the experts. It freed them. Where before they spent all their time on repetitive tasks, now they could pursue new business lines. Propose improvements. Detect failures that we later corrected in new training runs.

The final review always passed through their eyes.

The system was a tool. They remained the judgment.


The dopamine hit

Years later, the first time I used a code agent, I felt that fear again. The same underlying question: does this make me obsolete?

Claude Code, Codex, Cursor. The speed is real. You can have a connected API, with tests, in hours. What used to take weeks.

It’s a dopamine hit. For developers. For managers. For everyone. Almost like TikTok for C-Levels watching demos of features that are “already done”.

You can leave an agent running all night, throwing attempts like Monte Carlo until something works. Geoffrey Huntley calls it “Ralph”: the eventual consistency loop.
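The pattern, as I understand it, is just a retry loop: let the agent take a shot, check whether the tests pass, and go again if they don't. Here is a minimal sketch of that idea; the function names and the toy agent are placeholders of mine, not Huntley's actual setup:

```python
def ralph_loop(agent_attempt, tests_pass, max_attempts=100):
    """Retry the agent until the tests pass or the budget runs out.

    Returns the number of attempts it took, or None if the budget
    was exhausted without a passing run.
    """
    for attempt in range(1, max_attempts + 1):
        agent_attempt()          # one agent iteration proposing a fix
        if tests_pass():         # in practice: run the real test suite
            return attempt
    return None

# Toy stand-ins: the "agent" nudges some state each attempt,
# and the tests pass once that state reaches a target.
state = {"tries": 0}

def toy_agent():
    state["tries"] += 1

def toy_tests():
    return state["tries"] >= 3

result = ralph_loop(toy_agent, toy_tests)  # succeeds on the third attempt
```

Eventual consistency in one line: nothing guarantees any single attempt works, only that the loop keeps sampling until one does, or the budget runs out.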

I still feel that tension. Every time an agent solves something that would have taken me hours, there’s a part of me that wonders: how long until this no longer needs me?

But speed doesn’t eliminate the fear.


The anxiety of beginners

Lately I've been talking to a lot of people who want to enter the industry. Interns. Juniors. They ask me how the market is.

Byung-Chul Han writes about the burnout society: we live under the pressure of constant hyperproductivity. That already existed. But now that pressure is amplified. Impostor syndrome has a new argument: an agent that writes code faster than you. And the question I asked myself as CTO reappears, but now in them.

The underlying question is: does it make sense to learn to program if an AI can do it?

The short answer: yes.

The long answer is what I saw with that client: the experts’ fear was real, but reality was different. And I believe it applies here too.


What doesn’t change

The experts weren’t replaced because they understood the system. They knew when the model was right. They knew when it failed. They knew why.

A prompt doesn’t learn that.

Today you can create a website with a single instruction. But knowing whether it's well built requires fundamentals. System design. Judgment.

Addy Osmani puts it clearly: we end up being supervisors of these technologies.

We don’t compete with the machine.

We supervise it.


Closing

The market changed. The anxiety of beginners is valid.

But the need for people who deeply understand how systems work, who know when to trust and when to correct, that need doesn’t disappear.

It grows.

With that client, the success wasn’t the model.

It was the combination.

That fear never came true. And I suspect this one won’t either.