The 'Actions Gap': Why Drafting Isn't Enough
AI drafts your response in 10 seconds. Then you spend 5 minutes executing it. That's the Actions Gap — and it's where most AI tools stop short.
It's 2:47 PM. Your AI assistant just drafted the perfect response.
The Slack message from Sales asked you to upgrade Meridian to Closed Won and create an onboarding ticket. Your AI wrote: "Done! I've updated the deal stage and created the onboarding ticket." Beautiful. Accurate. Professional. Generated in 10 seconds.
Now you have to actually do it.
Open Salesforce. Find the Meridian account. Navigate to the opportunity record. Change the stage to Closed Won. Save. Open Linear. Create a new ticket. Set the type to Onboarding. Assign it to the CS team. Add context from the Slack thread. Link the Salesforce record. Go back to Slack. Paste the ticket URL. Edit the draft slightly because you added the link. Send.
The AI saved you 10 seconds on the draft. You just spent 5 minutes on the actions.
This is the Actions Gap. And it's the biggest blind spot in AI productivity today.
What the Actions Gap Actually Is
Most AI tools for work follow the same pattern: read input, generate text, present draft. Whether it's an email assistant, a Slack bot, or a chatbot powered by a large language model, the output is words: a nicely worded response that still requires a human to do the thing the text describes.
The Actions Gap is the space between what AI writes and what actually needs to happen. Every CRM update, every ticket creation, every calendar invite, every routing decision, every record change that follows the AI's nicely worded reply — that's the gap.
For a sales rep drafting outbound emails, the Actions Gap is small. You draft the email, you send the email, you're done. The action is the text.
For an Ops professional, the Actions Gap is enormous. You draft the reply, then execute a five-step workflow touching three different tools. The text is the easy part. The action is the job.
The Copy-Paste Tax
There is a specific version of the Actions Gap that deserves its own name: the copy-paste tax.
This is what happens when AI gives you a well-written draft, and then you spend the next several minutes copying data between tools to make that draft's promises real.
A real example from an Ops leader we interviewed:
"I have an AI that drafts my Slack responses. It writes: 'The Acme renewal is on March 15, current ARR is $48K, last NPS was 72, and the CSM is Jamie.' Beautiful. Accurate. Saved me maybe 30 seconds of typing. But I spent 4 minutes pulling that data from Salesforce, Gainsight, and our NPS tool to verify it. And then another 3 minutes updating the renewal tracker and creating a follow-up task in Asana. The AI wrote the easy part. I did the hard part."
The copy-paste tax is insidious because it feels like progress. You have an AI! It is drafting responses! But the time savings on the draft are dwarfed by the time spent executing everything around it.
The Math Nobody Talks About
As part of our survey of 50 Ops leaders, we analyzed time-tracking data from Ops professionals who logged their activities for a full week. Here is how the average Ops request breaks down:
| Activity | Time | % of Total |
| --- | --- | --- |
| Reading and understanding the request | 1 min | 7% |
| Gathering context from tools | 7 min | 47% |
| Drafting the response | 2 min | 13% |
| Executing follow-up actions | 4 min | 27% |
| Updating records and logging | 1 min | 7% |
| Total | 15 min | 100% |
AI drafting tools optimize the 13% — the 2 minutes spent writing the response. That is real value. But it leaves 87% of the work untouched. Context gathering (47%) is the single largest time sink, which we covered in the Ops Tax series. But the second largest — follow-up actions at 27% — is the one nobody talks about.
For a team handling 200 requests per week at 15 minutes each, the action phase alone consumes:
- 200 requests x 4 minutes = 800 minutes = 13.3 hours per week
- That is a third of a full-time employee's capacity spent on post-response execution
AI that only drafts addresses 2 minutes per request. AI that also executes actions addresses 6 minutes per request — 3x the impact.
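The capacity arithmetic above is simple enough to sanity-check in a few lines of Python; the request volume and per-phase minutes come from the time-tracking table, so plug in your own team's numbers:

```python
# Back-of-the-envelope model of the Actions Gap, using the per-request
# minutes from the time-tracking breakdown above.
REQUESTS_PER_WEEK = 200
DRAFT_MINUTES = 2       # time spent drafting the response
ACTION_MINUTES = 4      # time spent executing follow-up actions

action_hours = REQUESTS_PER_WEEK * ACTION_MINUTES / 60
print(f"Action phase alone: {action_hours:.1f} hours/week")  # 13.3 hours/week

# A drafting-only AI addresses 2 min/request; an action-taking AI
# addresses drafting + actions = 6 min/request.
multiplier = (DRAFT_MINUTES + ACTION_MINUTES) / DRAFT_MINUTES
print(f"Impact multiplier: {multiplier:.0f}x")  # 3x
```

At 200 requests per week, even a one-minute change in the action phase moves more than three hours of weekly capacity, which is why this lever dwarfs further drafting improvements.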
Five Actions That Should Never Require a Human
Not every action can be automated. But many of the most common Ops follow-up actions are repetitive, rule-based, and identical from one occurrence to the next. Here are five that consume hours per week and should not require human involvement.
1. Update a CRM Record
The manual version: Someone asks about a deal update. You respond in Slack, then open Salesforce, find the record, update the field, save, close the tab.
What should happen: The AI responds to the Slack message and updates the Salesforce record in the same step. One action triggers both the reply and the update. No tab-switching. No copy-paste.
Time saved per occurrence: 2-3 minutes | Frequency: 10-20 times per day for a typical RevOps team
2. Create a Ticket in Your Project Management Tool
The manual version: A bug report or feature request arrives in Slack. You acknowledge it, then open Linear or Jira, create the ticket, set priority, assign to the right team, add context from the thread, copy the ticket URL, paste it back into Slack.
What should happen: The AI creates the ticket from the Slack context, assigns it based on request type, includes all relevant context, and shares the ticket link in the same reply.
Time saved per occurrence: 3-5 minutes | Frequency: 5-15 times per day
3. Route a Request to the Right Person
The manual version: Someone asks a question in #ops-requests that belongs to Finance. You read it, determine who handles this type of request, DM that person with context, and reply to the original thread to confirm routing.
What should happen: The AI recognizes the request type, identifies the right owner based on your routing rules, forwards the full context, and replies to the original thread confirming the handoff.
Time saved per occurrence: 3-4 minutes | Frequency: 10-30 times per day
4. Schedule a Meeting With Context
The manual version: Someone asks to set up a call with a customer. You check Salesforce for account context, look up availability in Google Calendar, create the invite, add context to the description, and confirm in Slack.
What should happen: The AI checks availability, creates the invite with relevant account context pre-populated, and confirms in the original thread.
Time saved per occurrence: 4-6 minutes | Frequency: 3-8 times per day
5. Compile a Status Update From Multiple Sources
The manual version: Your CEO asks for a quick update on the top five deals. You open Salesforce for deal data, HubSpot for recent email activity, Slack for the latest conversations about each deal, and your forecast spreadsheet for projections. You synthesize everything into a message.
What should happen: The AI pulls current data from all sources, synthesizes the update, and presents a draft with live data — actual numbers from your systems, not generic placeholder text.
Time saved per occurrence: 15-30 minutes | Frequency: 2-5 times per week
Add it up. Just these five actions represent 5-10 hours of recoverable time per week for a single Ops professional. For a team of three, that is a part-time employee's worth of capacity locked inside copy-paste workflows.
Drafting AI vs. Action AI: The Honest Comparison
This is where the market breaks into two fundamentally different categories of tools — and where most teams are buying the wrong one.
| Capability | Drafting-Only AI | Action-Taking AI |
| --- | --- | --- |
| Generates text response | Yes | Yes |
| Pulls live data from tools | No (text based on training) | Yes (reads your CRM, PM tool, etc.) |
| Updates CRM records | No | Yes |
| Creates tickets automatically | No | Yes |
| Routes requests to right person | No | Yes |
| Schedules meetings | No | Yes |
| Proactive operation | No (reactive; waits to be called) | Yes (works before you read the message) |
| % of Ops workflow addressed | ~13% (drafting only) | ~87% (drafting + context + actions) |
| Best for | High-volume external email | Internal Ops request handling |
The column on the right is not a feature upgrade. It is a different category of tool solving a different problem.
Most inbox AI tools — Superhuman, Fyxer, even standard LLM chatbots — sit in the left column. They are excellent at what they do. But they are optimizing the 13%, and leaving the 87% for you to handle manually. We covered why this matters specifically for Ops teams in Why AI Email Assistants Miss the Point.
The critical distinction in the right column is proactive operation. Most drafting tools are reactive: you call them when you need them. An action-taking AI works before you even read the request. By the time you see the Slack notification, context has already been gathered, a draft is ready, and actions are staged for your approval. You are not responding to requests; you are confirming that the work is already done.
Jeff Su's breakdown of AI agents — tools that don't just generate text but take actions across connected systems — illustrates exactly why the gap between drafting AI and action AI is so significant.
Before and After: A Complete Request Lifecycle
Let's walk through one request end-to-end to make the difference concrete.
Request: "Can we upgrade Apex Industries to Enterprise and notify their CSM?"
Before — AI drafting only:
- Slack message arrives in #revenue-ops
- AI drafts response: "Sure, I'll upgrade Apex to Enterprise and let their CSM know."
- You open Salesforce and search for Apex Industries (30 sec)
- Navigate to the subscription record (15 sec)
- Change plan from Professional to Enterprise (20 sec)
- Update the contract value field (20 sec)
- Save the record (5 sec)
- Find the CSM in your team directory and DM them with the upgrade details (45 sec)
- Open your task tracker, log the upgrade (30 sec)
- Return to Slack, edit the draft to include the Salesforce link and CSM confirmation, send (45 sec)
Total time: ~3 minutes 40 seconds. The AI contributed the 10-second draft. You contributed the 3½ minutes of actions.
After — AI with action execution:
- Slack message arrives in #revenue-ops
- AI identifies request type: account upgrade + CSM notification
- AI pulls Apex Industries record from Salesforce, confirms current plan and CSM assignment
- AI drafts response: "Done. Apex Industries upgraded from Professional to Enterprise. CSM Jamie Chen has been notified. Salesforce record updated. [Link]"
- You review the draft and staged actions (20 sec)
- Click confirm (2 sec)
- AI sends the Slack reply, updates Salesforce, DMs the CSM, and logs the action
Total time: 22 seconds. The AI handled the draft and all the actions. You handled the review.
That is not a percentage improvement. That is an order-of-magnitude change.

Why "Draft + Human Execute" Is a Dead End
The counterargument is that the drafting-only model is fine — the AI does the thinking, the human does the clicking. Reasonable enough. But three problems break this argument in practice.
Problem 1: The clicking is the bottleneck. When we talk about the Ops Tax, we are not talking about time spent thinking about what to write. We are talking about time spent navigating between tools, copying data, and performing repetitive actions. An AI that drafts text but leaves the mechanical work to humans is solving the easy problem and ignoring the hard one.
Problem 2: Draft quality requires live context. Without access to your tools, the AI is drafting based on generic knowledge or incomplete information. A draft that says "I'll upgrade Apex to Enterprise" without knowing whether Apex is currently on Professional or Starter is just plausible-sounding text. With live data access, the draft can include actual plan names, real contract values, and specific CSM names. The quality of the draft and the quality of the actions are connected. You cannot fully optimize one without the other.
Problem 3: It does not scale. If your team handles 200 requests per week and each requires 4 minutes of manual actions, that is 13+ hours of human execution time per week — and that number grows linearly with volume. AI drafting does not bend this curve. Only AI that acts bends the curve.
Applying the Three Types Framework
Using the three types of Ops requests framework — Retrieval, Synthesis, Judgment — actions matter differently for each type, but they matter for all three.
Type 1 (Pure information retrieval — 70% of requests): The full lifecycle is automatable. AI looks up the data, drafts the response, and executes any needed follow-ups. Zero human time required if the action is simple (e.g., returning a data lookup); minimal review if the action involves a record update.
Type 2 (Synthesis + context — 20% of requests): AI pulls data from multiple sources, assembles a synthesis, drafts the response, and stages complex actions (create a ticket, update multiple records, notify stakeholders). Human reviews the synthesis and approves the actions. AI executes. Human time: 2-3 minutes per request instead of 30-90.
Type 3 (Judgment calls — 10% of requests): The human makes the call. But AI can prepare the full briefing (pulling all relevant data from every tool) and stage the follow-up actions for one-click execution once the decision is made. AI handles the prep and the execution overhead. Human handles the decision.
In every type, the value of action execution extends well beyond what drafting alone provides.
What "AI That Takes Action" Actually Means
This term gets used loosely, so it is worth being precise about what action execution does and does not mean.
It does NOT mean:
- AI autonomously making decisions without human oversight
- AI doing things you did not ask for
- AI with unrestricted system access
- Replacing human judgment with automated workflows
It DOES mean:
- AI that proposes specific actions based on request analysis ("I'll update Salesforce and create a Linear ticket")
- Human reviews and approves the proposed actions before execution
- AI executes approved actions through connected integrations
- Full audit trail of what was done, when, and by whom
- Rollback capability if something goes wrong
The model is: AI proposes, human approves, AI executes. Not AI goes rogue. This distinction matters because the biggest objection to AI actions is trust. And the answer to the trust question is: you are always in the loop. The AI eliminates the manual clicking between "I decided what to do" and "it is done."
Think of it as an extremely competent assistant who prepares everything and waits for your nod before pressing send.
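The propose-approve-execute loop can be sketched in a few lines. This is a minimal illustration, not any real product's API; the class and function names here are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    """One concrete action the AI stages for review (illustrative model)."""
    description: str
    execute: Callable[[], None]   # runs only after a human approves it

def run_with_approval(actions, approve):
    """AI proposes, human approves, AI executes; every outcome is logged."""
    audit_log = []
    for action in actions:
        if approve(action):                 # the human-in-the-loop gate
            action.execute()
            audit_log.append(f"executed: {action.description}")
        else:
            audit_log.append(f"skipped: {action.description}")
    return audit_log

# Example: two staged actions from one Slack request; the reviewer
# approves the CRM update but holds the ticket creation.
staged = [
    ProposedAction("Update Salesforce stage to Closed Won", lambda: None),
    ProposedAction("Create Linear onboarding ticket", lambda: None),
]
log = run_with_approval(staged, approve=lambda a: "Salesforce" in a.description)
```

The point of the sketch is the gate: nothing runs until `approve` returns true, and every decision, approved or not, lands in the audit trail.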
How to Evaluate Any AI Tool on the Actions Dimension
If you are currently evaluating AI tools for your Ops workflow, here is a practical checklist to separate drafting tools from action tools.
Basic questions:
- Does the tool connect to your CRM, project management tool, and communication platforms?
- Can it create new records (tickets, tasks, contacts) in those tools?
- Can it update existing records based on incoming request context?
- Can it route requests to specific people or channels automatically?
Advanced questions:
- Does it propose actions proactively, or do you have to specify every action manually?
- Is there a review and approval step before execution?
- Is there an audit trail of all actions taken?
- Can it chain multiple actions from a single request (e.g., update CRM + create ticket + notify person)?
- Does it handle errors gracefully when an action fails mid-execution?
Integration depth questions:
- How many services does it integrate with? Read-only or read-write?
- Can it handle your custom fields and specific tool configurations?
- Does it respect your permission model (certain users update certain records)?
If a tool scores well on drafting quality but zero on actions, it is solving 13% of your problem. That is worth knowing before you buy.
The Bottom Line
AI drafting is table stakes. Every major AI productivity tool generates a decent text response in seconds. That is not a differentiator anymore.
The new frontier is execution. The 87% of Ops work that happens after the draft: gathering live context, updating records, creating tickets, routing requests, scheduling meetings, notifying stakeholders. The mechanical, repetitive, multi-tool workflow that consumes hours every day.
The Actions Gap is real, it is measurable, and it is the single biggest untapped opportunity in Ops productivity right now.
Closing the gap does not require better text generation. It requires AI that understands not just what to say, but what to do — and then does it.
Tools like Runbear are designed specifically for this: monitoring requests across Slack, email, and calendar; pulling live context from 2,000+ connected services; and taking action — updating CRMs, creating tickets, routing requests — without requiring you to open a single additional tab. Not because drafting is unimportant, but because drafting without execution is only 13% of the solution.
Key Takeaways
- The Actions Gap is the 87%. AI drafting tools optimize the 13% of Ops work spent writing responses. Context gathering (47%) and action execution (27%) remain entirely manual.
- The copy-paste tax is a real cost. For a team handling 200 requests per week, the action execution phase alone consumes 13+ hours — a third of a full-time employee's capacity.
- Draft quality and action quality are linked. An AI without access to your live tools cannot draft accurately. Context access enables both better drafts and actual execution.
- Proactive matters as much as action. The most powerful AI tools do not wait to be called — they work before you read the message, so that by the time you see the request, the work is already staged for review.
- "AI proposes, human approves, AI executes" is the right model. Automation without oversight is not the goal. One-click execution after human review is.
Start Measuring Your Actions Gap
This week, for every Ops request your team handles, time three separate phases: reading + context gathering, drafting, and executing follow-up actions. Do this for five days.
By Friday, you will have a precise measurement of your own Actions Gap. Most teams find the action phase consumes 25-35% of total request time — often more than the drafting phase they thought was the bottleneck.
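If you want the percentage split computed for you, a few lines of Python do it. The minutes below are this article's averages, stand-ins for the numbers from your own week of logging:

```python
# Sketch: turn a week of logged phase timings into an Actions Gap
# measurement. Replace the minutes with your team's averages.
phases = {
    "reading + context gathering": 8,   # 1 min reading + 7 min context
    "drafting": 2,
    "executing follow-up actions": 5,   # 4 min actions + 1 min logging
}
total = sum(phases.values())            # 15 minutes per request
for name, minutes in phases.items():
    print(f"{name}: {minutes} min ({minutes / total:.0%})")
```

With the article's averages, the execution phase lands at 33% of total request time, squarely inside the 25-35% range most teams discover.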
Once you can see the gap, you can decide whether to keep filling it manually or find tools that close it for you.
This is the final post in the "Why Existing Tools Fail" series. Read the full series: Superhuman vs. Fyxer vs. Runbear and Why AI Email Assistants Miss the Point.
From the "Ops Tax" series: The Ops Tax | I Interviewed 50 Ops Leaders | Three Types of Ops Requests
