
Programming before AI agents felt simpler.
Not that the work was easy. I mean the shape of the work was easier to picture. I usually moved through one workflow at a time. For example:
Planning -> Coding -> Testing -> Deploying -> Deployed
Context switching was a heavy burden. Say I had 3 tasks in the backlog. Every time I came back to a task I had set aside, I needed time to gather the context back so my mind could return to where I left off. On top of that, coding itself demands full concentration, and it's not the kind of work you can leave halfway through without losing the thread of what you were thinking.
A single project cycle could consume deep focus for days, sometimes a whole week. I wasn't only working on the task, I was also carrying all the knowledge about that task in my head. Progress looked linear because my attention was sequential too. Here's a simple simulation of how it felt before AI:
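The simulation itself is interactive, but the model behind it is tiny. Roughly, in TypeScript (every number here is invented; only the shape matters):

```typescript
// Pre-AI model: one focus, sequential tasks. Focus is a warmup counter
// that resets to zero on every context switch, and progress only moves
// once I'm warmed up. Sometimes it even moves backward.

type Task = { name: string; progress: number }; // progress in [0, 100]

const WARMUP_TICKS = 5; // ticks of focus needed before real progress

function simulateSolo(tasks: Task[], ticks: number): void {
  let current = 0; // the one task I'm focused on
  let focus = 0;   // how warmed up I am on it

  for (let t = 0; t < ticks; t++) {
    const task = tasks[current];
    focus++;
    if (focus >= WARMUP_TICKS) {
      // A bug or a spec change occasionally pushes progress backward.
      const delta = Math.random() < 0.8 ? 3 : -2;
      task.progress = Math.max(0, Math.min(100, task.progress + delta));
    }
    if (task.progress >= 100 && current < tasks.length - 1) {
      current++; // move to the next task...
      focus = 0; // ...and pay the warmup cost all over again
    }
  }
  for (const t of tasks) console.log(`${t.name}: ${t.progress}%`);
}

simulateSolo(
  [
    { name: "Project A", progress: 0 },
    { name: "Project B", progress: 0 },
    { name: "Project C", progress: 0 },
  ],
  300
);
```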
Notice how Focus has to start over again whenever the project context changes. And notice how progress can move forward or backward depending on the situation. That's roughly the picture.
AI and the tempting illusion of parallelism
After AI coding agents arrived, two things changed drastically for me. First, coding can now be left running. I no longer need to be hands-on while coding. Second, AI can help gather context, which makes context switching easier. As a result, I can keep several threads of work alive at once.
Sounds like a dream for an engineer who has too many side projects but limited time. But the actual experience isn't as simple as parallel execution. What really happens is closer to managed parallelism.
An agent doesn't truly move forward just by existing. It only moves forward when I focus on it, read the latest state, review its output, or resolve some ambiguity. Here's the simulation I made to picture the current situation:
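Again, the real thing is interactive, but the model underneath is simple (numbers invented again): agents tick forward on their own until they hit ambiguity, and a blocked agent stays frozen until my focus lands on it. Left unattended, every agent eventually stalls.

```typescript
// Agentic model: several agents run at once, but each can block on a
// question (pushback), and a blocked agent stays frozen until I attend
// to it. My focus is the scarce resource being scheduled.

type Agent = { name: string; progress: number; blocked: boolean };

function simulateAgents(agents: Agent[], ticks: number): void {
  let focusOn = 0; // index of the agent I'm currently attending to

  for (let t = 0; t < ticks; t++) {
    agents.forEach((agent, i) => {
      if (agent.blocked) {
        if (i === focusOn) agent.blocked = false; // my focus unblocks it
        return;
      }
      agent.progress = Math.min(100, agent.progress + 1);
      if (Math.random() < 0.05) agent.blocked = true; // pushback!
    });
    // My focus jumps to whichever agent is stuck (a context switch).
    const stuck = agents.findIndex((a) => a.blocked);
    if (stuck !== -1) focusOn = stuck;
  }
  for (const a of agents) console.log(`${a.name}: ${a.progress}%`);
}

simulateAgents(
  [
    { name: "Agent A", progress: 0, blocked: false },
    { name: "Agent B", progress: 0, blocked: false },
    { name: "Agent C", progress: 0, blocked: false },
  ],
  200
);
```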
At this point I started seeing focus as separate nodes. Focus moves from one agent to another, and that movement is context switching. Context switching, the thing programmers always tried to avoid, has become a job requirement in the agentic era. AI agents can soften the switch by giving us a summary, but our brains still need a warmup to understand the situation and make a decision.
Once I send a prompt to the agent, the work starts moving forward and I can shift to something else while waiting.
Planning -> Coding
Of course, no work flows perfectly. The agent might come back with questions or ask for confirmation. The implementation might not fully meet expectations. Or the agent might be working off a wrong assumption. These are pushbacks that pull our focus back in:
Planning -> Coding
   ↖
Ask Confirmation
Pushback isn't new; it existed before the agentic era. The difference now is that the loop of execution -> feedback -> execution can get much shorter, because the AI agent executes faster than we do.
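In code terms, pushback is just one more transition in the state machine. A minimal sketch, with stage names mirroring the diagram above:

```typescript
// Pushback as a state machine: it doesn't fail the work, it routes the
// work back through me.
type Stage = "Planning" | "Coding" | "AskConfirmation" | "Done";

function next(stage: Stage, pushback: boolean): Stage {
  switch (stage) {
    case "Planning":
      return "Coding";
    case "Coding":
      return pushback ? "AskConfirmation" : "Done";
    case "AskConfirmation":
      return "Planning"; // my answer becomes the next, better instruction
    case "Done":
      return "Done";
  }
}

// One illustrative run: the first coding pass gets pushed back once.
let stage: Stage = "Planning";
for (const pushback of [false, true, false, false]) {
  console.log(stage);
  stage = next(stage, pushback);
}
console.log(stage); // Planning, Coding, AskConfirmation, Planning, Coding
```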
Pushback isn't a failure, it's a signal
I don't see pushback as a pure failure in defining requirements. Even with very extensive PRDs, we can still get spec changes mid-implementation. When an agent comes back from Coding to Planning, usually something needs to be clarified: scope is too broad, acceptance criteria aren't explicit, or I haven't shared the context it should have read.
What matters is not letting the same pushback happen over and over. One step back is fine, but the next instruction has to be better. Ideally, this becomes the moment to change the iteration from:
"Build this feature"
into:
"Build this feature with these constraints, read this file first, don't touch that part, and stop once this test is green"
The more agents you have running, the more important this kind of constraint-setting becomes. Without constraints, parallelism turns into noise.
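One way to make that constraint-setting mechanical is to refuse to send a prompt until the guardrails are filled in. A sketch with a hypothetical template (this isn't any agent's real API; the paths are made up):

```typescript
// Hypothetical prompt template: the goal alone is not enough to send.
interface TaskPrompt {
  goal: string;
  readFirst: string[];  // context the agent must load before coding
  doNotTouch: string[]; // files or areas that are off-limits
  stopWhen: string;     // an explicit, checkable finish line
}

function render(p: TaskPrompt): string {
  if (p.readFirst.length === 0 || p.stopWhen === "") {
    throw new Error("No constraints, no prompt.");
  }
  return [
    p.goal,
    `Read these first: ${p.readFirst.join(", ")}.`,
    `Do not touch: ${p.doNotTouch.join(", ")}.`,
    `Stop once this is true: ${p.stopWhen}.`,
  ].join("\n");
}

console.log(
  render({
    goal: "Build the export feature.",
    readFirst: ["docs/export-spec.md"], // illustrative paths
    doNotTouch: ["src/billing/"],
    stopWhen: "the export integration test is green",
  })
);
```

The point isn't the code; it's that "read this first" and "stop when" become required fields instead of afterthoughts.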
Parallel work disguised as linear
The next problem appears when a single project is no longer one straight line. It can branch into sub-tasks:
A: Planning -> Coding -> Testing -> Deploying -> Deployed
   ↳ Subplan -> Coding -> Testing -> Deploying -> Deployed
   ↳ Subplan -> Coding -> Testing -> Deploying -> Deployed
This is where leverage feels enormous, but also mentally expensive. I made the simulation below to visualize it. It's no longer n projects but n(1 + s) work streams, where s is the number of sub-branches each project spawns:
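The arithmetic is small, but it compounds fast:

```typescript
// n projects, each spawning s sub-branches: n(1 + s) live work streams.
const n = 3; // projects
const s = 2; // sub-branches per project
console.log(n * (1 + s)); // 9 streams, all competing for one focus
```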
If you want to feel for yourself how focus jumps between agents, I also built an interactive version at /assets/simulations/2026/04/programming-work-on-agentic-era/. Give it a try before reading on.
I no longer just ask, "What state is project A in?" The question shifts to, "Which part of project A needs my focus right now, and what context do I need to reload to help it move forward?"
It's like opening a bunch of browser tabs while debugging. At first it feels productive. After too many tabs pile up, you start forgetting which tab has the original error, which one is just docs, and which one you opened out of panic.
AI coding agents have the same flavor. The difference is, those tabs can write their own code.
The temptation to keep adding scope
This is the one I feel the most when a task is approaching the finish line. Thoughts like "is every use case covered?" and "should we add feature x too?" become very tempting because the AI can knock those out in minutes. The downside is that the diff you have to review keeps getting bigger and more complex.
Lately I've been running the /simplify skill in Claude Code to help me trim and simplify complex code. I use a similar prompt to push back on plan documents the agent produces when they start getting too complex.
AI doesn't replace focus
My current conclusion:
AI doesn't remove the need for focus. It changes focus from doing one thing deeply into managing when many things are allowed to move.
This doesn't mean deep work disappears. I still need long stretches of focus to understand a tricky problem, make architectural decisions, or weigh tradeoffs that can't be handed off to an agent.
What changes is the rhythm. I used to drop into one piece of work and stay there for a long time. Now I'm more often a scheduler for several pieces of work that are all alive at once.
The simulations above are obviously simplifications, because human focus isn't always at 100%. Sometimes I can context switch in minutes, but sometimes it takes hours just to remember why I asked the agent to do something in the first place. My experience playing Factorio helps here: in that game I context switch all the time without burning out, because the reward is instant (dopamine).
The number of tasks doesn't have to be three either. I just like that number because it still feels human. Beyond that, my head starts to crack.
Not every task needs the full Planning, Coding, Testing, and Deploying flow. For small work, sometimes Coding and Testing are enough. For risky changes, Planning can be much longer than Coding.
Even so, this simple model helps me understand one thing: the key skill in the agentic era isn't just writing code faster. The skill that matters more and more is keeping focus healthy, setting clear constraints, and knowing when a piece of work should move forward, step back, or stop entirely.
How about you? Does the AI agent make your work feel lighter, or does it just push context switching beyond your control?