Measuring What Matters: How Inverta Tracks the Real Business Impact of AI

April 16, 2025

2025 is the year AI proves itself. The hype machine continues to do its job (who's guilty of making their action figure using ChatGPT's new image tools?) - now it’s on us to translate excitement into efficiency, creativity into clarity, and pilots into performance.

At Inverta, we treat AI as a multiplier. It’s a way to accelerate what we’re already good at: strategy, storytelling, and execution that drives results. But multipliers only work if you’re clear on what you’re multiplying for. For us, it’s two things:

  • Improving profitability through smarter, more efficient internal workflows.
  • Enhancing client retention by delivering clearer, more tailored, and more impactful deliverables.

In this post, we’re pulling back the curtain on how we’ve started measuring the impact of AI inside Inverta, and what our Q1 results are starting to tell us.

Because let’s face it: AI doesn’t make better marketing. People do. But better internal coordination, smarter discovery, and faster insight generation? That’s where AI helps us show up sharper, faster, and more in tune with what our clients need.

Tracking Team Engagement: From Curiosity to Contribution

Change doesn’t happen because someone sent a memo. It happens when your team feels like they’re part of it. So one of our first priorities this year was to create momentum and confidence in AI adoption.

Here’s what we did:

  • Rolled out individual AI tools with clear use cases, plus a monthly AI tool roll-up that gives the team a quick snapshot of what’s new or changing. We monitor engagement by checking whether the video instructions are getting views. We’re finding that, yes, they are, but the views come when the team has a use for the tool, not before. That’s why we...
  • Re-launched the peer-led “AI Nerds” spotlight series by asking team members to show where they’re getting value out of the AI tools we’ve launched. We had six presenters in Q1, showing off use cases ranging from personalization pitch decks to checking strategy decks against demand gen best practices and summarizing where our projects add value to clients. Asking the team to show off gets everyone else thinking about additional use cases, so to manage those we...
  • Created a Monday.com intake form where team members could request tools. In Q1, 23 requests came in (and we completed 15!).

As a result, we’re seeing:

  • Engagement is shifting from passive interest to active contribution. Team members are suggesting tools, not just reacting to them.
  • The dedicated AI Slack channel and monthly rollups aren’t just one-way updates anymore - they’re turning into feedback loops.
  • Most notably, consultants are now using synthetic personas to simulate client feedback by running deliverables through tools that “think” like our clients. That means fewer surprises in feedback cycles and sharper deliverables on the first pass. (A rough sketch of the pattern follows below.)
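
To make that concrete, here’s a minimal sketch of what a synthetic-persona review can look like, assuming an OpenAI-style chat API. The persona description, model name, and prompt wording are hypothetical illustrations, not our production setup.

```python
# A minimal sketch of a synthetic-persona review, assuming the OpenAI
# Python SDK. The persona and prompts are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = (
    "You are 'Dana', a skeptical VP of Demand Generation at a mid-market "
    "SaaS company. You care about pipeline impact, dislike vague claims, "
    "and push back on anything you couldn't defend to your CFO."
)

def persona_review(deliverable_text: str) -> str:
    """Ask the synthetic persona to react to a draft deliverable."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": PERSONA},
            {
                "role": "user",
                "content": (
                    "Review this draft as if it were presented to you. "
                    "List what lands, what you'd push back on, and what "
                    "questions you'd ask:\n\n" + deliverable_text
                ),
            },
        ],
    )
    return response.choices[0].message.content

print(persona_review("Draft messaging framework goes here..."))
```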

Measuring Team Adoption: Usage Isn’t the Whole Story

There’s a difference between using a tool and using it well. In Q1, we saw a 162% increase in AI usage compared to Q4, but that’s not the full picture.

Here’s where it gets interesting:

  • Even as usage went up, sentiment declined (sentiment measures how well the response meets the intention of the prompter). Why? Not because AI is failing, but because we’re hitting the next phase of adoption: the reality check.
  • Team members shared things like:

    “I expected it to think for me, but it needed better inputs.”
    “I didn’t know which tool to use for which task.”

That tells us something important: people are eager, but they need clearer guidance. Especially when it comes to matching the right tool to the right task. Outreach agents aren’t great thought partners. Synthetic personas are wasted on summarizing content. When we get those matches wrong, frustration follows.

Top-performing use cases:

  • Synthetic personas for pre-feedback reviews.
  • BDR custom GPTs generating highly tailored, multi-touch sequences.
  • Messaging framework tools that aligned internal teams faster during strategy work.

Tools that fizzled:

  • Gemini’s in-app slide generation hasn't resonated yet - the slides we build are better.
  • General “thought partner” bots underperformed unless paired with custom instructions or clearer context.

That’s why Q2’s priority is not just more usage, it’s better usage - starting with clearer documentation, better training on use case fit, and guidance embedded into the tools themselves.

Evaluating Service Delivery: Are We Actually Working Smarter?

So what happens when your team starts using AI in the right places? In Q1, we focused on two stages of the engagement lifecycle: discovery and content development. And we’re starting to see tangible impact.

Discovery: Deeper Insights, Faster Alignment

Yes, AI helped us parse transcripts faster. But more importantly, it helped surface better insights and drive earlier alignment with clients.

We saw:

  • A 25% increase in time spent on discovery. Yes, this seems bad, but we’re factoring in the initial setup and prompt work needed to get AI to help.
  • Better insights from discovery, which leads to more tailored recommendations.
  • Quicker approvals and clearer client feedback, likely because the insights were stronger upfront.

Translation? We’re spending slightly more time to get to significantly better outputs - ones that move us closer to the “final” version faster.

Content Development: Speed Without Sacrificing Quality

In January, we tested agent-driven prep work - summarizing interviews, extracting pain points, organizing content inputs. By February and March, we shifted focus to:

  • Using AI to generate messaging drafts
  • Testing those messages with synthetic personas
  • Building BDR outreach sequences directly from frameworks

Result: We cut prep time on messaging and outreach drafting by 70%. That figure includes our development time plus client review and feedback - so we’re not robbing Peter to pay Paul.

That’s not just a nice-to-have; it’s a fundamental shift in how fast we can move without sacrificing quality.

Connecting the Dots to Profitability

Here’s the question every leadership team asks: Is this making us more profitable?

The answer so far? We’re getting there.

Efficiency Gains

AI tools like bulk interview summarization and sequence drafting let our teams move faster without needing more headcount. We’re not “automating jobs” - we’re unlocking capacity to take on more.

We’ve even had a few moments in Q1 where a consultant supported additional clients, or carved out time to work on Inverta’s own business, thanks to AI-generated prep - and did so without burnout. Still early days, but it’s promising.

Service Line Profitability

We haven’t been able to measure material margin increases yet. Why? Because it’s still early. We’re drawing insights from 8 campaigns across 3 clients - honestly, not enough data to generate a definitive answer. But we’re now developing a staffing-to-deliverable ratio to track how AI affects service output (sketched below).
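
As a rough illustration of the kind of math we have in mind (all numbers below are hypothetical, not Q1 actuals), the ratio could be as simple as deliverables shipped per consultant-week, compared with and without AI-assisted prep:

```python
# Hypothetical illustration of a staffing-to-deliverable ratio.
# All numbers are made up for the example; the real metric is
# still being developed.

def staffing_ratio(deliverables_shipped: int, consultant_weeks: float) -> float:
    """Deliverables shipped per consultant-week of effort."""
    return deliverables_shipped / consultant_weeks

before_ai = staffing_ratio(deliverables_shipped=12, consultant_weeks=40)
with_ai = staffing_ratio(deliverables_shipped=16, consultant_weeks=40)

print(f"Before AI: {before_ai:.2f} deliverables per consultant-week")
print(f"With AI:   {with_ai:.2f} ({with_ai / before_ai - 1:.0%} lift)")
```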

If we can prove that consultants are handling more without burnout, or if project timelines shrink without scope cuts, that’s where margins start to shift.

We expect Q2 or Q3 to show the first real signals here.

Client Retention: AI as a Strategic Differentiator

The ultimate goal of AI in our business isn’t to dazzle clients with tech, it’s to serve them better.

And that starts with discovery.

We want clients to tell us they feel more understood, faster. We use AI to help us identify key themes, gaps, and questions before we even hop on a call.

When clients see that we’ve anticipated their thinking, they respond faster. That leads to fewer revisions, faster consensus, and more strategic conversations.

We haven’t formally tracked NPS movement or renewal rates yet, but here’s what we are seeing:

  • Clients engaging more deeply in conversations about AI
  • Greater openness to trying co-developed tools and frameworks, such as synthetic buyer personas
  • Consultants saying they’re getting to better insights faster
  • Faster approvals and fewer revisions, pointing to higher confidence in our recommendations

In Q2, we’ll launch a formal feedback loop to track how much of that stickiness is directly tied to our AI use.

What We’re Learning

Here's a hard truth: AI isn’t plug-and-play. It takes calibration. It takes judgment. It takes real people learning when and how to use it.

Here’s what Q1 taught us:

  • Availability ≠ Adoption. Just because a tool is there doesn’t mean it’s being used or used well.
  • Impact per user matters more than total users. A few power users can deliver big results. But what delivers more success? More power users. Focus on intentionally granting access and working with the people who are engaged and have the most impactful use cases before trying to manage adoption for everyone.
  • Not everything needs to scale. Some tools are meant for specific roles or moments in the workflow.
  • Friction is normal. Early frustrations with tool choice or unclear prompts aren’t failure, they’re signals. We need clearer instructions, better defaults, and smarter ways to onboard people into the right use cases.

What’s Next

We’re moving from proving “can we use AI?” to answering “how well is it working?”

Our next steps:

  • Pairing usage data with qualitative feedback to track what’s actually driving impact.
  • Embedding tool prompts directly into process docs (e.g., “what tool should I use for this?”).
  • Launching role-specific training that clarifies “this tool is great for you when doing this.”

And longer term:

  • Formal KPIs around time saved, speed-to-value, and client satisfaction.
  • New service offerings that integrate AI natively (not just as a bolt-on).
  • Improved win rates and margin expansion as AI becomes part of how we differentiate.

What does success look like 6–12 months from now?

It’s not 100% adoption. It’s not flashy tech demos.

It’s this:

  • A strategy consultant running sharper discovery in half the time.
  • Turning content feedback into revised drafts in hours, not days.
  • A client saying, “You really got us this time.”

That’s the real business value of AI. And that’s what we’re measuring.