Case Study — 02
Shipping a conversational AI assistant inside a B2B platform with 10M+ users
Building AI features inside an enterprise B2B platform is a fundamentally different problem from building a greenfield AI product. The constraints are real: legacy data models, enterprise security requirements, a user base with widely varying technical literacy, and a sales team that needs to explain the feature on calls. This is how we navigated that.
Context
A platform with $93M in revenue and a user base that spans from solo agencies to enterprise teams
Vendasta's platform serves a wide range of users — from individual digital agency owners managing 20 clients to enterprise operations teams running thousands of accounts. That user diversity is a strength commercially and a significant challenge for AI product design.
When we began exploring where generative AI could add meaningful value, the options were numerous. The question was how to ship something that would work across that range of users, integrate safely with existing data models, and not create new support burden. We needed to be deliberate.
Problem
Workflow complexity was the core friction. AI was the right lever — if applied correctly.
Through user research and support ticket analysis, a consistent pattern emerged: users were spending disproportionate time on repetitive, context-intensive tasks — drafting client reports, creating campaign briefs, generating performance summaries. The platform had the data; users were doing the synthesis manually.
"The platform had the data. Users were doing the synthesis manually. That gap is where AI belongs."
Approach
Custom data grounding, security first, then user experience
The AI assistant couldn't just be a generic LLM wrapper. For it to be useful on a B2B platform, responses needed to be grounded in platform-specific data: account performance metrics, campaign history, client context, product usage signals. We built the data grounding layer before we designed the interface.
Data inventory and access scoping
Mapped which platform data was safe to surface to which user types. Multi-tenant architecture means one agency's data can never leak into another's AI context; that guarantee had to be structural, not just policy. Worked with security and engineering to define the access model before writing any prompts.
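One way to make that isolation structural rather than policy-based is to require a tenant-scoped handle for every data fetch, so cross-tenant reads are impossible by construction. This is a minimal illustrative sketch, not the production access model; the class names and record shapes are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TenantContext:
    """Identity handle every data access must carry."""
    tenant_id: str
    role: str  # e.g. "agency_owner", "enterprise_ops"

class GroundingStore:
    def __init__(self, records):
        # records: list of dicts, each tagged with its owning tenant
        self._records = records

    def fetch(self, ctx: TenantContext, kind: str):
        # The tenant filter lives inside the store itself, so callers
        # cannot forget it or bypass it.
        return [
            r for r in self._records
            if r["tenant_id"] == ctx.tenant_id and r["kind"] == kind
        ]

store = GroundingStore([
    {"tenant_id": "acme", "kind": "campaign", "name": "Spring promo"},
    {"tenant_id": "globex", "kind": "campaign", "name": "Q3 launch"},
])

ctx = TenantContext(tenant_id="acme", role="agency_owner")
print(store.fetch(ctx, "campaign"))  # only acme's records appear
```

The point of the pattern: the AI context builder can only ever see what the store returns for its tenant handle, so isolation does not depend on prompt discipline.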
Use case prioritization by automation ROI
Ranked potential AI use cases by: time currently spent by users, feasibility of grounding responses in platform data, and risk of incorrect output. Performance report drafting scored highest on all three. Started there.
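The ranking above can be sketched as a simple weighted score. The field names, 1-to-5 scales, and weights here are illustrative assumptions, not the team's actual rubric; the structure just shows how time spent and feasibility raise priority while output risk lowers it.

```python
def score(use_case):
    # Higher time_spent and feasibility raise priority;
    # higher risk of incorrect output lowers it (inverted on a 1-5 scale).
    return (use_case["time_spent"]
            + use_case["feasibility"]
            + (5 - use_case["risk"]))

use_cases = [
    {"name": "performance report drafting", "time_spent": 5, "feasibility": 5, "risk": 1},
    {"name": "campaign brief creation",     "time_spent": 4, "feasibility": 3, "risk": 2},
    {"name": "client email replies",        "time_spent": 3, "feasibility": 2, "risk": 4},
]

ranked = sorted(use_cases, key=score, reverse=True)
print(ranked[0]["name"])  # performance report drafting ranks first
```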
Built and launched the conversational assistant
Used generative AI with custom-grounded platform data to power the assistant. Rolled out to a beta cohort of 500 accounts, monitored output quality, gathered structured feedback, iterated on prompt architecture and response formatting before broader release.
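Conceptually, grounding means assembling tenant-scoped platform data into the prompt before the model sees the question. A minimal sketch, assuming a simple account record and a plain-text prompt template (neither reflects the production prompt architecture):

```python
def build_grounded_prompt(question, account):
    # Assemble platform data the model is allowed to reason over.
    context_lines = [
        f"Account: {account['name']}",
        f"Active campaigns: {', '.join(account['campaigns'])}",
        f"30-day conversions: {account['conversions_30d']}",
    ]
    # Instruct the model to stay inside the supplied context rather
    # than improvise from general knowledge.
    return (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, say so.\n\n"
        "Context:\n" + "\n".join(context_lines) + "\n\n"
        f"Question: {question}"
    )

account = {
    "name": "Acme Digital",
    "campaigns": ["Spring promo", "Retargeting"],
    "conversions_30d": 142,
}
prompt = build_grounded_prompt("Draft a performance summary.", account)
print(prompt)
```

In a real system this string would be sent to the model API along with output-format instructions; iterating on exactly this assembly step is what "iterated on prompt architecture" refers to.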
Outcome
Improved engagement, faster workflows, and measurable adoption across the platform
The conversational AI assistant improved workflow efficiency and engagement metrics across the platform. Users who adopted the assistant generated reports and briefs faster, with higher satisfaction scores in post-task surveys. Automation of repetitive synthesis tasks reduced support tickets related to reporting complexity.
What I Learned
Enterprise AI is 40% capability, 60% trust architecture.
The hardest part of shipping AI inside an enterprise platform isn't building the AI — it's building the infrastructure that makes the AI safe to deploy. Multi-tenant data isolation, access controls, output quality thresholds, and escalation paths for uncertain responses all have to be designed before the user experience.
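An output-quality threshold with an escalation path can be as simple as a confidence gate: below the bar, the assistant declines and routes to a human instead of delivering a possibly wrong answer. This sketch assumes the model returns a confidence score per response, which is an assumption about the serving stack, not a description of the actual implementation.

```python
CONFIDENCE_THRESHOLD = 0.75  # illustrative cutoff, tuned in practice

def route_response(response_text, confidence):
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"action": "deliver", "text": response_text}
    # Below threshold: escalate rather than risk a wrong answer.
    return {
        "action": "escalate",
        "text": "I'm not confident enough to answer that. "
                "Routing to a human reviewer.",
    }

print(route_response("Your campaign converted at 3.2%.", 0.91)["action"])  # deliver
print(route_response("Your campaign converted at 3.2%.", 0.40)["action"])  # escalate
```

Designing this gate (and the reviewer workflow behind it) before the UI is what "trust architecture" means in practice: the interface only ever shows responses that have already passed the gate.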
The second lesson: shipping AI on a mature platform means inheriting its complexity. The data access model, the tenant isolation architecture, the existing user mental models — all of it constrains what you can build and how fast you can move. Understanding those constraints early is what lets you move confidently later.