
The Three Types of Ops Requests (And Why Only One Can Be Automated)

70% of Ops requests are pure lookups. 20% need cross-tool synthesis. 10% need human judgment. Here's a framework to categorize yours and reclaim your time.

It’s 10:14 AM. You’ve been at your desk for an hour. Your Slack already looks like this:

@sales-team: “Hey, what’s the ARR for Acme Corp?”
@product-lead: “Can you pull together why Globex churned last quarter? Need it for the board deck.”
@ceo: “We need to figure out how to handle the Initech pricing situation. Can you put together options?”

Three messages. Three requests. But they are not the same kind of work.

The first takes 30 seconds if you know where to look. The second takes 45 minutes of cross-referencing four different tools. The third requires a week of analysis, three conversations, and a judgment call that could affect a $200K account.

Most Ops teams treat every request the same way: open Slack, read the message, start clicking through tools. But the path to reclaiming your time starts with a simple realization. Not all requests are equal, and they shouldn’t be handled equally.

After interviewing 50 Ops leaders and analyzing thousands of internal requests, we found that every Ops request falls into one of three types. Understanding which type you’re dealing with changes everything about how you prioritize, automate, and ultimately reclaim the Ops Tax your team pays every day.

Type 1: Pure Information Retrieval

What it is: A question with a single, factual answer that lives in one of your tools.

How much of your workload: ~70%

Examples:

  • “What’s the ARR for Acme Corp?” (Salesforce)
  • “When does the Globex contract renew?” (Billing system)
  • “Who’s the account owner for Initech?” (HubSpot)
  • “What’s the status of ticket #4521?” (Linear/Jira)
  • “How many support tickets did Acme file last month?” (Zendesk)

What makes it a Type 1: There is one correct answer, and it lives in one place. The work is not thinking. The work is finding.

This is the category where Ops professionals spend the most time on the least valuable activity. Remember the data from our survey of 50 Ops leaders: 5.3 tools checked per request, 12 minutes gathering context for every 2 minutes typing a response. For Type 1 requests, most of that time is pure navigation overhead. You already know the answer exists in Salesforce. You just need to open Salesforce, find the right record, locate the right field, and copy it back into Slack.

Time cost: 5–15 minutes per request (mostly navigation and lookup)

Automation potential: Fully automatable. An AI with access to your tools can answer these in seconds. No judgment required. No synthesis needed. Just retrieval.

Time savings if automated: 3–5 hours per day for a typical Ops team

Type 2: Synthesis + Context

What it is: A question that requires pulling information from multiple sources and connecting the dots.

How much of your workload: ~20%

Examples:

  • “Why did Globex churn?” (Needs: CRM deal history + support ticket patterns + product usage data + sales call notes)
  • “Should we prioritize the Acme renewal?” (Needs: revenue data + relationship history + expansion pipeline + competitor intel)
  • “What’s happening with the Initech implementation?” (Needs: project tracker + support tickets + recent Slack conversations + calendar history)
  • “Can you give me a customer health summary for Q1?” (Needs: usage metrics + NPS scores + support volume + revenue trends)

What makes it a Type 2: There is no single place where the answer lives. The answer is assembled from fragments scattered across multiple tools, and it requires someone to synthesize those fragments into a coherent narrative.

This is the category where Ops professionals add real value but burn the most energy. A Type 2 request isn’t hard because the individual pieces are complex. It’s hard because the pieces are in five different places and none of them talk to each other.

One RevOps director described it perfectly in our interviews:

“The answer to ‘why did they churn’ is never in one tool. It’s 30% in Salesforce, 20% in Zendesk, 20% in our product analytics, 15% in email threads, and 15% in someone’s head. My job is to be the human API that connects all of them.”

Time cost: 30–90 minutes per request (mostly cross-referencing and synthesizing)

Automation potential: Partially automatable. AI can gather the raw data from each tool and draft a synthesis. But a human should review the narrative, add institutional context, and validate the conclusions. Think of it as AI doing the 45-minute research phase and a human doing the 10-minute review and refinement.

Time savings if automated: 20–60 minutes per request (AI handles gathering, human handles judgment)

Type 3: True Judgment Calls

What it is: A decision that requires institutional knowledge, political awareness, and human judgment.

How much of your workload: ~10%

Examples:

  • “Should we offer Acme a custom pricing plan?” (Requires understanding competitive dynamics, internal precedent, relationship history, and strategic priorities)
  • “How do we handle this escalation with our biggest customer?” (Requires reading the room, understanding stakeholder dynamics, and making a call on risk)
  • “Should we change our onboarding process based on the feedback from the last 5 implementations?” (Requires weighing conflicting signals and making a strategic bet)
  • “Which vendor should we go with for the new CRM migration?” (Requires evaluating trade-offs that can’t be reduced to a spreadsheet)

What makes it a Type 3: There is no objectively correct answer. The right call depends on context that doesn’t live in any tool: company politics, unwritten priorities, relationship nuances, risk tolerance, and strategic vision.

This is the work Ops professionals were actually hired to do. This is where experience, judgment, and institutional knowledge matter most. And it’s the category that gets squeezed out when Types 1 and 2 consume the entire day.

Remember the time allocation data from our survey: Ops professionals spend just 7% of their time on strategic work. Type 3 requests are strategic work. The other 93% is overhead.

Time cost: Hours to days (research, deliberation, stakeholder alignment)

Automation potential: Not automatable. But AI can dramatically accelerate the prep work. Instead of spending two hours gathering data before you can even start thinking about the decision, AI can hand you a briefing document with all relevant context pre-assembled. You skip straight to the judgment call.

Time savings if automated prep: 1–3 hours of prep time per decision

The Three Types at a Glance

Here’s how all three types compare across the dimensions that matter most for automation planning:

| | Type 1: Retrieval | Type 2: Synthesis | Type 3: Judgment |
|---|---|---|---|
| Share of volume | ~70% | ~20% | ~10% |
| Time per request | 5–15 min | 30–90 min | Hours to days |
| Where the answer lives | One tool | Across 3–6 tools | Nowhere (it's a decision) |
| Automation potential | Fully automatable | Partial (AI does 80%) | Not automatable (AI preps the brief) |
| Human role | None (AI replaces the lookup) | Reviews and refines | Owns the decision |
| Example tools | Salesforce, HubSpot, Jira | Cross-tool synthesis (Runbear, n8n + integrations) | N/A |
| Weekly time savings (200 req/wk) | ~35 hrs | ~20 hrs | ~10 hrs |

The most important row is the last one. Full automation of just the retrieval category — Type 1 — frees up 35 hours per week. That’s nearly a full-time position reclaimed without hiring a single person.
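If it helps to make the triage mechanical, the "what makes it a Type 1/2/3" criteria above reduce to a simple decision rule. Here is a minimal sketch; the single-tool threshold is an illustrative assumption, not a hard cut-off:

```python
def classify_request(tools_needed: int, needs_judgment: bool) -> str:
    """Rough triage rule distilled from the three-type framework.

    tools_needed: how many tools you would have to open to answer.
    needs_judgment: whether the answer depends on context that lives in
    no tool (politics, precedent, risk tolerance, strategy).
    """
    if needs_judgment:
        return "Type 3: judgment call"   # human decides, AI preps the brief
    if tools_needed <= 1:
        return "Type 1: retrieval"       # one correct answer, one place
    return "Type 2: synthesis"           # fragments across several tools

print(classify_request(1, False))  # Type 1: retrieval
print(classify_request(4, False))  # Type 2: synthesis
print(classify_request(3, True))   # Type 3: judgment call
```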

The autonomy spectrum in AI research maps almost exactly to the Type 1/2/3 framework. This IBM Technology breakdown explains how agents are designed for different levels of task complexity — from reactive lookup to multi-step execution:

(Video: From reactive lookup agents to fully autonomous decision systems — the same autonomy spectrum governs how different Ops request types require different levels of AI involvement.)

The 80/20 Opportunity Most Teams Are Missing

Types 1 and 2 represent 90% of all Ops requests. They are fully or partially automatable. And yet most teams are handling all three types the same way: a human opens Slack, reads the message, and starts clicking through tools.

The math is straightforward:

  • Type 1 (70% of requests): Fully automatable. Zero human time needed.
  • Type 2 (20% of requests): AI handles 80% of the work. Human reviews and refines.
  • Type 3 (10% of requests): Human handles with AI-prepared context.

If your team handles 200 requests per week, that means:

  • 140 Type 1 requests that could be answered automatically (saving ~35 hours/week)
  • 40 Type 2 requests where AI could cut prep time by 75% (saving ~20 hours/week)
  • 20 Type 3 requests where AI could prepare the brief (saving ~10 hours/week)

Total potential time savings: ~65 hours per week. That’s more than one full-time employee’s worth of capacity freed up, without hiring anyone.
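If you want to rerun this math for your own request volume, here is a minimal sketch of the arithmetic. The per-request minutes saved (15, 30, and 30) are illustrative assumptions chosen to reproduce the rough totals above; swap in your own numbers from the audit described below:

```python
# Back-of-the-envelope version of the weekly savings math above.
REQUESTS_PER_WEEK = 200

request_mix = {
    # type: (share of volume, minutes saved per request if automated)
    "Type 1 (retrieval)": (0.70, 15),  # AI answers the lookup outright
    "Type 2 (synthesis)": (0.20, 30),  # AI gathers, human reviews
    "Type 3 (judgment)":  (0.10, 30),  # AI only preps the brief
}

total_hours = 0.0
for label, (share, minutes_saved) in request_mix.items():
    count = REQUESTS_PER_WEEK * share
    hours = count * minutes_saved / 60
    total_hours += hours
    print(f"{label}: {count:.0f} requests/week -> ~{hours:.0f} hours saved")

print(f"Total: ~{total_hours:.0f} hours/week")
# Prints ~35, ~20, and ~10 hours respectively, for a total of ~65 hours.
```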

Why Most AI Tools Only Solve Type 1

If the opportunity is this clear, why hasn’t anyone solved it?

The answer is that most AI tools are designed for a different problem. Email AI assistants like Superhuman and Fyxer are built for customer-facing professionals who live in their inbox. They’re excellent at Type 1 for email: drafting replies, scheduling sends, and organizing your inbox. But they don’t touch the cross-tool synthesis that defines Type 2.

Chatbots and knowledge bases can answer pre-programmed questions. But they break down the moment a request requires pulling live data from Salesforce and combining it with context from Zendesk and usage data from your analytics platform.

The gap in the market is Type 2. The cross-tool synthesis problem. The “human API” problem that RevOps director described. It’s the hardest technical challenge, but it’s also where the most time is wasted.

Tools like Runbear are specifically designed to address this gap — connecting across Slack, email, and calendar while pulling context from 2,000+ services to handle both Type 1 lookups and Type 2 synthesis. The key distinction from email-only AI tools: Runbear doesn’t just draft a response; it takes action. It retrieves the data, assembles the synthesis, and delivers the answer — without you needing to open a single tool. But regardless of which tool you evaluate, the framework is the same: any AI solution for Ops that only handles Type 1 is solving 70% of requests but leaving the most painful 20% completely untouched.

Your Five-Day Request Audit

Before you evaluate any tool, buy any software, or change any process, do this first.

For the next five business days, categorize every incoming request your team receives.

For each request, log:

  1. Who asked
  2. What they asked
  3. Which type (1, 2, or 3)
  4. How long it took to resolve
  5. Which tools you needed

At the end of the week, tally the results. You’ll see your own version of the 70/20/10 split. Maybe yours is 60/30/10. Maybe it’s 80/15/5. The exact numbers matter less than the pattern.
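A spreadsheet is all you need for the log itself. If you want to automate the tally, the sketch below does the same job in a few lines; the sample rows and field names are illustrative, not a prescribed template:

```python
# Minimal sketch of the five-day audit tally: one entry per request,
# mirroring the five fields above (who, what, type, minutes, tools).
from collections import defaultdict

audit_log = [
    {"who": "sales",   "request": "ARR for Acme Corp?",       "type": 1, "minutes": 8,   "tools": ["Salesforce"]},
    {"who": "product", "request": "Why did Globex churn?",    "type": 2, "minutes": 55,  "tools": ["Salesforce", "Zendesk", "product analytics"]},
    {"who": "ceo",     "request": "Initech pricing options?", "type": 3, "minutes": 180, "tools": ["Salesforce", "Slack"]},
]

totals = defaultdict(lambda: {"count": 0, "minutes": 0})
for entry in audit_log:
    key = f"Type {entry['type']}"
    totals[key]["count"] += 1
    totals[key]["minutes"] += entry["minutes"]

grand_total = sum(v["count"] for v in totals.values())
for key in sorted(totals):
    share = totals[key]["count"] / grand_total * 100
    hours = totals[key]["minutes"] / 60
    print(f"{key}: {totals[key]['count']} requests ({share:.0f}% of volume), {hours:.1f} hours")
```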

Once you can see the pattern, you can make targeted decisions:

  • If Type 1 dominates: You need better tool access and automation. The fix is relatively straightforward.
  • If Type 2 is higher than expected: You need cross-tool synthesis. This is a harder problem but it’s also where you’ll unlock the most time.
  • If Type 3 is consuming disproportionate time: Your team may be under-scoped. Type 3 work is supposed to be the job.

The Bottom Line

Not all Ops requests are created equal. Treating a Salesforce lookup the same way you treat a churn analysis is like using a bulldozer to plant a flower. The tool doesn’t match the task.

The three-type framework gives you a lens to see where your time actually goes, and where automation can make the biggest impact. Type 1 is the low-hanging fruit. Type 2 is the hidden goldmine. Type 3 is the work that makes Ops teams indispensable.

Start the audit this week. In five days, you’ll have more clarity about your team’s workload than most Ops leaders get in a year.

This is the third post in our “Ops Tax” series. Read the full series: The Ops Tax: The Hidden Cost of Waiting on Your Operations Team and I Interviewed 50 Ops Leaders. Here’s What They Told Me.