This is a recap of our recent webinar, AI Success Stories in GTM: How Deloitte & AWS Moved from Strategy to Measurable ROI. Watch it on demand here.
According to McKinsey's 2025 State of AI report, 88% of global organizations now use AI in at least one business function, yet only 39% can point to a measurable impact on their bottom line. Our last webinar with Sandler, ServiceNow, and Acrisure explored why that gap exists in realizing the ROI of AI in revenue-generating operations.
In a recent conversation, leaders from AWS and Deloitte showed how the organizations that get ROI on their AI investments share a common pattern. They didn't start by adding more AI tools. They started with a specific problem, resolved what was broken underneath, and built AI into how their teams already work.
Here are five actions our experts recommend putting to work with your team now.
Before you can measure whether AI is making an impact on topline revenue, you first need to ask whether you're measuring the right way.
Dana Therrien, SVP Commercial Excellence at Varicent, surfaced a blind spot many revenue leaders haven't fully accounted for. Sellers are already using AI, just not the AI the company deployed. They're using unsanctioned tools to prep for calls, research prospects, and build their own intelligence.
As Therrien put it: if the company can't see it happening, there's a direct mismatch between the ROI that leadership thinks it's getting from its investment and what's actually happening in the field.
This has two practical consequences. First, your internal ROI reporting likely understates what's already happening at the rep level. Second, if sellers are building their own intelligence systems outside sanctioned tools, you have no visibility into what they're relying on or how accurate it is.
Before benchmarking your AI investment, it's worth running a quick audit of what AI tools your reps are actually using. Manager conversations, rep surveys, and a review with IT can give you a more honest baseline than your deployment metrics alone.
Even when organizations invest in the right places, they tend to hit the same wall: the data underneath the system isn't ready for what sales and revenue leaders are asking AI to do.
As Zach Faithful, Deloitte Digital’s GTM Sales Transformation Leader, shared, the disconnect is between what leadership expects AI to produce and the state of the data AI is actually working with. Teams want strategic, actionable insights from day one, but most haven't resolved the foundational questions first. Where does the data live? Who owns it? How clean is it?
When an AI model can't pull from a consolidated, trustworthy source, the model surfaces recommendations that sales reps don't trust. And when reps don't trust the recommendations, they stop using the tool. As Zach put it, if AI isn't giving sellers clear next actions (who to call, what to say, what to update, what to stop doing), the tool tends to get shelved.
Our latest study, Building for Compounding Growth: What 150+ Revenue Leaders Say About AI’s True ROI, reinforces this. 40% of leaders said AI outputs were only as strong as the business processes and training that supported them. Fixing that starts with understanding where your planning process breaks down.
Therrien's advice was to map your data handoffs. Where does territory output go? How does territory data reach comp? Where does it land after finance signs off? Every handoff is a potential break point where outdated or mismatched data distorts what comes downstream.
Most RevOps and sales operations leaders who run this audit can find two or three places where manual workarounds have been quietly covering problems with data quality for years. According to Therrien, those tend to be the highest-leverage entry points for AI-powered tools and automation.
Companies that make Q1 tend to make the year. AWS built an entire program around that premise.
Pilar Schenk, Vice President Global Sales Ops at AWS, walked through the Fast Start initiative. The goal was to get sellers ready to sell on January 1st rather than scrambling through February. AWS completed account plans 40 days earlier, saw the quality of those plans rise 36% year over year, and recorded a meaningful impact on Q1 revenue attainment.
What Schenk was clear about is where the program started: with a business problem. How do we make sure sellers have real momentum from day one? From there, her team mapped the specific sales processes and habits they wanted to change, then applied AI to those targeted points in the workflow.
She was equally candid about how fast the landscape shifts. AWS launched two AI sales tools last year and shut both of them down before this webinar. The technology changed faster than the development cycle. Her takeaway was to build fast, stay agile, and stay anchored to the business problem.
As Varicent’s Dana Therrien explained, the AI implementations that tend to produce the most impact aren't the ones sales reps interact with directly. They're the tools that do the work upstream. By the time sellers arrive at SKO, AI has already shaped their territories, set their quotas, and structured their comp plans to reflect how they sell. The rep never had to engage with the tool. They just showed up to a better plan.
That aligns with what we found in our study, Building for Compounding Growth: What 150+ Revenue Leaders Say About AI’s True ROI. More than a third of revenue leaders surveyed said adoption accelerated precisely when AI became less visible in their day-to-day work, when it was embedded into existing workflows rather than introduced as a separate sales technology tool.
One concrete example Therrien gave was comp administration. When sellers have questions about their payout, those questions typically route to a comp manager who answers by email or closes a ticket. Generative AI can handle a meaningful share of those questions directly, reducing volume for the comp team and getting reps faster answers, without changing anything about how reps work.
The more useful question is how to build AI that delivers value in the planning process before reps ever need to think about it.
Leadership sign-off on an AI initiative tends to come early. According to Deloitte Digital’s Zach Faithful, getting sellers and RevOps teams to trust an AI tool takes far longer, and that's where most programs run into trouble.
Faithful laid out three things that tend to move the needle in gaining adoption of AI in sales:
Co-design with end users. Buy-in tends to start with involving the people who'll use the system in shaping it. When sellers and RevOps teams help define what AI recommends and how AI surfaces those recommendations, those teams are more likely to trust the output and advocate for it with peers.
Make the logic transparent and controllable. Resistance tends to drop when AI recommendations include a visible rationale ("here's why this territory assignment was made," "here's how this quota was set") and when a human can override the output. Black-box recommendations, even accurate ones, tend to get ignored.
Give end users something back. Buy-in tends to come fastest when there's something concrete in it for the end user. Time back, clearer priorities, a quota they can actually defend. If AI adds steps or increases complexity, most teams abandon the tool regardless of how well the underlying model performs.
The warning sign, Faithful mentioned, is added complexity. If AI creates more manual data entry, more steps, or more cognitive load for a team that's already stretched, the program tends to fail regardless of how strong the underlying model is.
Across the webinar, the conversation kept coming back to the same three steps for moving AI from pilot to ROI. The teams getting there started with a specific problem, audited the process and data underneath it, and built AI directly into the workflow.
Three practical places to start:
Ask what AI sales tools your reps are using. Unsanctioned AI is already running in your pipeline. Understanding where it's showing up tells you more about what problems reps need solved than your internal roadmap does.
Map your planning process before buying anything. Audit every data handoff from territory to comp to finance to ops. Look for where manual workarounds are masking problems with data quality. Those tend to be the highest-value entry points for AI.
Define the business problem before scoping the tool. AWS didn't buy AI for Fast Start. They defined Fast Start as a business problem, then applied AI to the specific steps where it could move the timeline. That sequencing separates the programs that compound from the ones that stay stuck in pilot.
The Go-to-Market Blueprint for AI Success gives you a 4-step execution plan to govern, measure, and scale AI across your revenue org, including ready-to-use slides you can take directly into your next leadership presentation. Download it here.