When Customer Insight Reveals You’re Solving the Wrong Problem
How foundational research and AI decision advisory across four studies helped a global enterprise software company pivot its agentic AI strategy mid-project, saving months of development and millions in opportunity cost.
Photo by Lilian Velet on Unsplash
Imagine you’re a product team at one of the world’s largest enterprise software companies. You have a clear vision for how agentic AI will transform your product in two major industry verticals. You’ve built prototypes. You’ve gotten internal buy-in. You’re ready to ship.
There’s just one question nobody has asked yet: Is this the right thing to build?
That’s the question our client brought to us. And the answer changed everything.
The Client
Our client is a global enterprise technology company with products used by thousands of organizations across industries. They were in the process of expanding their platform with AI-powered capabilities, specifically agentic AI solutions designed for vertical markets. Two verticals were in focus: financial services and healthcare.
The product team had already developed working prototypes for both verticals: an agentic AI solution for loan processing in financial services and another for medical claims processing in healthcare.
The prototypes were technically impressive. The team was energized. But they had a gap: they hadn’t yet pressure-tested these solutions against the real-world workflows, motivations, and pain points of the people who would actually use them.
That’s where Amplinate came in.
The Challenge
The client’s leadership understood something that many product organizations resist acknowledging: the distance between a compelling internal demo and a product that actually solves the right problem for the right people is enormous. They had conviction about their agentic AI prototypes, but also the intellectual honesty to want them validated, or challenged, by the market.
They came to us with a specific ask: help us understand our target users deeply enough to know whether these agentic AI solutions are aimed at the right problems. And if they’re not, help us figure out where to aim instead.
This isn’t a question you can answer with a survey or a few usability tests. The client wasn’t asking us to evaluate button placement or information architecture. They were asking us to evaluate product-market fit at the concept level before they’d committed engineering resources at scale. That requires a fundamentally different kind of research. It requires AI decision advisory grounded in deep customer insight.
The Approach
We designed a research program that combined two complementary methods across two industry verticals, four discrete studies in total. Each study was built on the same foundation, adapted for the unique context of the vertical and the specific agentic AI solutions being evaluated.
Foundational Customer Insight Research
The first component of each study was deep, qualitative foundational research. We conducted 90-minute in-depth interviews with professionals across both verticals: loan officers, underwriters, credit officers, and operations leaders in financial services; and revenue cycle specialists, claims processors, authorization analysts, and audit managers in healthcare.
Across 24 participants (12 per vertical), we mapped the real workflows, motivations, and pain points of the people the client’s products would need to serve. We weren’t looking for surface-level feedback. We were excavating the structures that shape how these professionals think about their work, their tools, and their willingness to adopt AI.
From this customer insight research, we developed detailed personas grounded in Jobs to be Done. Each persona captured not just demographics and role descriptions, but the underlying motivations that drive behavior: the difference between a loan officer who sees herself as a “quarterback” racing to close deals and a credit officer who sees himself as a “guardian” defending the bank from risk. These aren’t marketing segments. They’re behavioral archetypes that reveal what users will and won’t accept from agentic AI.
We also derived AI Design Principles from the research, a set of guidelines specific to each persona that define the characteristics any AI experience must embody to earn trust and adoption. Principles like “accuracy” and “trust” aren’t platitudes when they’re grounded in direct observation of professionals who told us, in no uncertain terms, that a single AI error could result in regulatory fines and unhappy customers.
Concept Evaluation
The second component was a structured evaluation of the client’s existing prototypes. Within the same 90-minute sessions, after the foundational exploration, we walked participants through interactive prototypes and asked them to evaluate the agentic AI solutions against their real workflows.
This sequencing was intentional. By the time participants saw the prototypes, we had already established deep context about their daily reality. We understood what they cared about, what frustrated them, and what they feared about AI. So when they reacted to the prototypes, we weren’t just hearing “I like it” or “I don’t like it.” We were hearing responses filtered through a rich understanding of why.
The concept evaluation confirmed that the prototypes addressed real needs. Participants found genuine value in the solutions. But it also revealed something the client hadn’t anticipated: the problems these prototypes solved were not the most painful problems these users face.
The Discovery
This is the part of the story that separates customer insight research at Amplinate from “just gathering data.”
In financial services, the agentic AI prototypes focused on one part of the process. Participants saw clear value in those capabilities, particularly a comparison feature that one participant called a “killer feature.” But the foundational research revealed that the top pain points in this vertical weren’t about that part of the process at all. They were about system fragmentation, disorganized communication, and a complete lack of shared visibility across teams.
Loan officers described spending significant time manually tracking deals in spreadsheets, chasing colleagues for status updates over email, and coordinating multiple parties (processors, underwriters, title companies, attorneys) through disconnected channels. The thing keeping them up at night wasn’t a specific task that needed AI automation. It was that their tools didn’t talk to each other, and nobody had a single source of truth for where things stood in the pipeline.
Several participants went further: they expressed concern that the proposed features would actually make their problem worse by not integrating with their existing workflow. This is a finding that no amount of usability testing on the prototype would have surfaced. It requires the kind of deep contextual understanding that comes from foundational customer insight research.
In healthcare, the pattern was even more striking. The client had built a solution designed to help staff quickly distill information into actionable summaries. On paper, it made perfect sense, but when we sat with the people who actually do this work every day, we heard a different story.
Yes, navigating a lot of text is painful. But it’s not the most painful thing. For some participants, the most frustrating challenge was dealing with volatile, unstandardized payer rules that change constantly and without notice. For others, the biggest struggle was system fragmentation: toggling between what they described as “antiquated mainframes” and disparate portals to process claims. (I personally worked in healthcare over 20 years ago, and it was the same story then as it is now.)
And here’s the nuance that only deep customer insight reveals: even within the problem space the prototype was designed for, the solution wasn’t quite right. Participants told us they didn’t actually need the proposed feature. What they needed was precise extraction of discrete data points: the modifiers that determine whether a claim gets approved or denied. A summary, no matter how well-crafted, wasn’t actionable enough for their decision-making.
Mapping the Real Opportunities
With the foundational research complete and the concept evaluation providing a clear signal on what resonated and what missed the mark, we built opportunity maps for each vertical. These maps prioritized the challenges and automation opportunities we’d uncovered, organized them by the personas who experience them, and ranked them by severity and frequency.
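To make that ranking concrete, here is a minimal sketch of how an opportunity map entry could be scored. Every field name, persona label, score, and weighting below is an illustrative assumption for this article, not the client’s actual model or data.

```python
from dataclasses import dataclass

@dataclass
class Opportunity:
    """One pain point or automation opportunity surfaced in the research (illustrative)."""
    name: str
    personas: list[str]   # behavioral archetypes who experience this pain
    severity: int         # 1-5: how painful it is when it occurs (hypothetical scale)
    frequency: int        # 1-5: how often participants reported running into it

    @property
    def priority(self) -> int:
        # A simple severity-times-frequency product; a real map would also weigh
        # strategic fit, feasibility, and qualitative nuance from the interviews.
        return self.severity * self.frequency

# Invented entries that echo the kinds of findings described above.
opportunities = [
    Opportunity("Shared pipeline visibility across teams", ["Quarterback", "Guardian"], 5, 5),
    Opportunity("Integration with existing systems", ["Quarterback"], 5, 4),
    Opportunity("Document comparison (existing prototype)", ["Guardian"], 3, 3),
]

for opp in sorted(opportunities, key=lambda o: o.priority, reverse=True):
    print(f"{opp.priority:>3}  {opp.name}  ({', '.join(opp.personas)})")
```

The exact scoring formula matters less than the fact that the criteria are explicit and traceable back to what participants actually said.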
The opportunity maps told a clear story: the client’s existing prototypes addressed real needs, but several higher-priority opportunities had been overlooked entirely.
Problems like cross-team communication and status visibility, integration with existing systems, and, in healthcare, precise data extraction rather than static summarization: these were the opportunities that would deliver the most value and differentiation.
We also delivered something the client hadn’t originally asked for but quickly recognized as one of the most valuable outputs of the engagement: a cross-vertical design principles scorecard. By analyzing the personas and their AI expectations side by side, we identified universal principles that applied across all user types, shared principles that appeared in multiple personas, and persona-specific principles unique to individual roles.
This scorecard became a living tool the product team could use to evaluate any current or future agentic AI feature against the design principles derived from real user research. It’s the kind of artifact that pays dividends long after the research engagement ends, because it encodes customer truth into the decision-making process.
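As a rough illustration of how a scorecard like this can encode customer truth into day-to-day decisions, the sketch below rates a hypothetical feature against universal, shared, and persona-specific principles. The principle names, personas, and ratings are invented for the example; the client’s actual scorecard is a research artifact, not this code.

```python
# Hypothetical scorecard structure: universal principles apply to every persona,
# shared principles apply to several, and persona-specific principles to one.
# All names and ratings here are invented for illustration.
UNIVERSAL = ["accuracy", "transparency"]
SHARED = {"auditability": ["Guardian", "Audit Manager"]}
PERSONA_SPECIFIC = {"Quarterback": ["speed to close"], "Guardian": ["risk containment"]}

def score_feature(ratings: dict[str, int], personas: list[str]) -> dict[str, float]:
    """Average a feature's 1-5 ratings within each bucket of design principles."""
    def avg(principles: list[str]) -> float:
        rated = [ratings[p] for p in principles if p in ratings]
        return round(sum(rated) / len(rated), 1) if rated else 0.0

    shared = [p for p, roles in SHARED.items() if any(persona in roles for persona in personas)]
    specific = [p for persona in personas for p in PERSONA_SPECIFIC.get(persona, [])]
    return {"universal": avg(UNIVERSAL), "shared": avg(shared), "persona_specific": avg(specific)}

# Example: rating a hypothetical data-extraction feature for two of the personas above.
print(score_feature(
    {"accuracy": 5, "transparency": 4, "auditability": 5, "risk containment": 3},
    personas=["Guardian", "Audit Manager"],
))
```

A structure like this keeps the evaluation repeatable: every new feature proposal gets rated against the same persona-grounded principles.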
The Pivot
Here’s where the story takes the turn that makes this engagement worth writing about.
A traditional research engagement would have ended with a report. We’d present findings, offer recommendations, hand over the deliverables, and move on. The client would take our insights and, at some point in the future, try to translate them into product decisions.
That’s not what happened here.
Because the findings were so clear and the implications so significant, the client made the decision to act immediately. Mid-project, while we were still delivering the final research readouts, we were brought into product strategy sessions to help reshape the roadmap. This is what AI decision advisory looks like in practice: not a slide deck with recommendations, but active participation in the strategic decisions that determine what gets built and why.
Our recommendation wasn’t to abandon what they’d built. The prototypes had genuine value, and participants confirmed that. Our recommendation was to keep moving forward with the existing agentic AI solutions while adding new, higher-priority items to the roadmap, items that addressed the problems customers actually ranked as most painful.
In financial services, that meant evolving the tools into a platform that addressed communication, visibility, and system integration, not just task automation. In healthcare, it meant pivoting from narrative summarization to dynamic, context-aware data extraction that could adapt to the different scenarios, payer rules, and specific data points each decision required.
We worked with the product team to create a revised product roadmap grounded in the research. This wasn’t a theoretical exercise. It was a working document that reprioritized features, identified new solution areas, and gave the team a research-backed rationale for every decision on the roadmap.
The Impact
The most immediate impact was a mid-project course correction that would have been impossible without the depth of customer insight this research program provided. The client didn’t have to wait until after launch to discover a product-market misalignment. They caught it before committing the engineering resources that would have made pivoting exponentially more expensive.
Consider what that means in practical terms. Enterprise software development cycles are long and expensive. Redirecting a product after it’s been built and shipped means not just the cost of redevelopment, but the opportunity cost of everything the team could have been building instead, plus the reputational cost of launching something that doesn’t fully land with customers. By surfacing the misalignment early, the research paid for itself many times over.
But the impact went beyond the immediate course correction. The personas, design principles, and opportunity maps we delivered became foundational artifacts that the product organization uses to evaluate all feature decisions, not just the ones we studied. They’re reference points in design reviews, prioritization discussions, and roadmap planning sessions. The research didn’t produce a one-time report. It produced a decision-making framework.
And perhaps the strongest signal of impact: the client has asked us to repeat this exact research program for additional product verticals across their business. The methodology worked. The customer insights were actionable. The strategic value was clear enough that they want to replicate it everywhere.
Why This Matters Beyond This Client
There’s a larger lesson in this story that applies to any company building agentic AI products right now.
The AI gold rush has created an environment where product teams feel enormous pressure to ship AI features quickly. The technology is moving fast. Competitors are moving fast. The temptation is to start with what’s technically possible, what the models can do, and work backward to find use cases. But “technically possible” is not the same as “strategically valuable.” And “use case” is not the same as “most important problem.”
What this engagement demonstrated is that the gap between a reasonable agentic AI application and the highest-value agentic AI application can be enormous, and the only way to close that gap is through deep customer insight. Not surveys. Not analytics dashboards. Not internal assumptions about what customers want. Deep, qualitative, foundational research that reveals the structures of how people work, what they’re afraid of, and what would actually make their professional lives meaningfully better.
Our client could have shipped their prototypes as-is and gotten decent adoption. The solutions addressed real needs. But “decent adoption” isn’t what you want when you’re making a strategic bet on vertical AI. You want product-market fit that creates competitive moats. You want users who don’t just use your tool but can’t imagine working without it. That only happens when you’re solving the right problem in the right way. That is the purpose of AI decision advisory: ensuring the strategy is right before the engineering begins.
Our Methodology at a Glance
For those who want the structural details, here’s how we organized the research:
Scope: Four discrete studies across two industry verticals (financial services and healthcare), each combining foundational customer insight research with concept evaluation.
Method: 90-minute in-depth interviews that combined foundational exploration with concept evaluation. The foundational portion explored workflows, motivations, pain points, and attitudes toward AI. The concept evaluation portion assessed the client’s working agentic AI prototypes against real-world needs.
Participants: 24 professionals (12 per vertical) across financial services and healthcare, spanning roles from frontline specialists to senior leadership.
Deliverables: Behavioral personas grounded in Jobs to be Done, journey maps, AI design principles per persona, opportunity maps prioritized by pain severity and frequency, concept evaluation with specific next-step recommendations, cross-vertical design principles scorecard, and a revised product roadmap.
Key Frameworks: Personas and Jobs to be Done, Journey Maps, AI Design Principles, Opportunity Mapping, and Concept Evaluation. Each framework served a distinct purpose in the research, and together they created a comprehensive picture of where the product stood and where it needed to go.
Working with Amplinate
At Amplinate, we believe that the most important thing research can do isn’t to confirm what you already believe. It’s to reveal what you’re missing.
The best product teams we work with don’t come to us looking for validation. They come looking for the truth about their customers, even when that truth is uncomfortable, even when it means changing course.
This case study demonstrates what AI decision advisory looks like when it’s grounded in deep customer insight: not technology assessments or capability demos, but the strategic architecture that determines whether agentic AI creates value or liability.
If your organization is building agentic AI products and you want to ensure you’re solving the problems that matter most to your customers, we’d love to talk.
Amplinate is a product strategy and AI decision advisory firm operating across 19 countries. We help technology companies build the right products for the right people through foundational customer insight research, concept evaluation, and strategic advisory for AI-driven product development.
amplinate.com | josh@amplinate.com