Automate the Self-Contained. Engage the Rest.
Apr 29, 2026
Tamer El-Hawari
AI took the production. The work moved to what comes after.
Knowledge work used to be straightforward: you did the work, and the work produced the output. AI broke that link. Outputs now come first. Cheaply, quickly, before any thinking has happened. Most PMs haven't adjusted.
Effort and output used to be bound together. You couldn't have the synthesis without reading the interviews. AI broke the binding, and for the first time, the output can exist without the work that produced it. The distance between a task and a polished artifact collapsed from hours to minutes.
The work hasn't disappeared. It moved. Effort for the knowledge worker has relocated to what happens on top of the output: the thinking, the steering, the judgement. Our role shifted with it. We used to be producers. Now we manage a task force that delivers in minutes.
If the work now comes after the output, what does that work actually look like?
A PM runs a deep research query and gets back a polished ten-page market analysis. The output looks authoritative. The work is just starting. The PM reads it against what they already know, looking for where the AI's synthesis runs thin: sources that don't hold up, signals that look significant but aren't, segments ruled out for reasons that feel incomplete. They don't just read the document; they argue with it. The analysis isn't the deliverable. The deliverable is the segmentation call, the positioning decision, the forecast, and those require a mental model of the market, which the document alone doesn't provide.
Not every task looks like this. A weekly stakeholder update, a rephrased rejection email, a formatted deck. The output is the end. Send it.
But for the work that matters, the output is where the work starts. If work now happens on top of the output, the practical question becomes: when does the output need me on top of it, and when doesn't it? A task is safe to automate when it's self-contained: the procedure is clear, the context is available in the prompt, and the output can be verified at a glance. A weekly stakeholder update fits this. You know the format. The context (what shipped, what's next) is already in your head. You can read the AI's draft and spot an error instantly. Send it.
Automate when the task is self-contained: clear procedure, available context, verifiable output.
Engage personally when any of those is missing. If the procedure isn't clear, you don't yet know what good output looks like. If the context isn't available, the AI is working without what you know. If the output isn't verifiable, you won't know it's wrong until something downstream breaks. A PRD and the market analysis above fail on all three. That's why they need you after the output.
Once you see the rule, something bigger opens up. When a task is self-contained, you don't just automate it. You can hand it into a chain of other automated tasks. Output feeds output feeds output. What used to be a week's work collapses into a sequence of AI steps with you only at the end.
This is where most PMs hesitate. The instinct to stay in control at every step is strong. But control isn't the point. Evaluation is. If each link in the chain meets the same test, stacking is safe, and your effort is better spent at the end of the chain, where judgement actually matters.
When the links aren't self-contained, stacking is where the debt compounds. Each output inherits the unchecked assumptions of the one before it. By the time you see the final artifact, you're looking at something built on a foundation nobody examined. And the errors aren't in the last step. They're in the first.
The shift isn't about using AI more or using it less. It's about knowing when the task is one you can hand off and when the task is still yours. Automate the self-contained. Engage the rest. The skill you were hired for doesn't happen before the output anymore. It happens on top of it.