When I joined Contentstack in 2022 as their first Growth PM, the company had been selling enterprise headless CMS for several years and growing. What it didn't have was a self-serve trial. No in-app onboarding. No behavioral analytics pipeline. No activation milestones. You could not sign up for Contentstack on the website and use it — you had to talk to sales first.
That's not unusual for enterprise SaaS at a certain stage. It becomes a problem when the company decides to add a self-serve motion and nobody has built the infrastructure to support one.
My job was to change that. Over three years, I built the growth engine from scratch: trial funnels, onboarding flows, lifecycle messaging, and the data infrastructure underneath all of it. MAUs grew over 300% during that stretch. Here's how the work actually happened.
What I walked into
Contentstack is an enterprise headless CMS. Developers use it to manage content and deliver it through APIs to websites, apps, whatever. The platform is powerful, but in 2022 it was entirely sales-led. If you wanted to try the product, you talked to someone first.
There was a rough sketch of an onboarding tour on a whiteboard somewhere, but nothing had been implemented. No activation metrics. No conversion funnels. No lifecycle messaging. The whole user journey was a blank page.
Oh, and there were four data centers — fully isolated, bespoke environments. Each one had its own user database, completely disconnected from the others. The same email address could exist independently in different regions. By the time I left, there were seven. Every system I built had to account for that fragmentation.
Wiring together the data layer
Before I could build anything for users, I had to build the infrastructure that would tell me what users were doing. This meant connecting:
Contentstack's internal user database (for plan and entitlement data)
Salesforce (CRM, sales motions)
Marketo (drip campaigns, nurture sequences)
Heap (product analytics, the foundation for personalization and cohorting)
An internal ESP (user validation)
Snowflake (data lake, also used for cohorting)
Getting these systems talking to each other across multiple isolated data centers took real coordination. It wasn't glamorous work, but without it none of the onboarding or messaging could be targeted or measured.
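One consequence of the isolated data centers is that email alone can't identify a user — any unified profile has to key on region plus email. Here's a minimal sketch of that idea; the region codes, class names, and field names are all my own illustrative assumptions, not Contentstack's actual schema:

```python
from dataclasses import dataclass

# Hypothetical model: each data center keeps its own user database, so the
# same email can exist as independent accounts in several regions.
@dataclass(frozen=True)
class RegionalUser:
    region: str   # e.g. "na", "eu" -- illustrative region codes
    email: str

def unify_profiles(users):
    """Group per-region records into one cross-region view per email."""
    profiles = {}
    for u in users:
        profiles.setdefault(u.email, set()).add(u.region)
    return profiles

users = [
    RegionalUser("na", "dev@example.com"),
    RegionalUser("eu", "dev@example.com"),   # same email, separate account
    RegionalUser("na", "editor@example.com"),
]
profiles = unify_profiles(users)
# profiles["dev@example.com"] now spans two independent regional accounts
```

Every downstream system — Salesforce, Marketo, Heap, Snowflake — had to respect some version of this join, or attribution and messaging would silently double-count or miss users.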
Segmenting the trial experience
Not all trial users are the same, and treating them the same is a fast way to lose most of them. We built separate intake channels, each with its own nurture funnel, data attribution, onboarding flow, and messaging:
Enterprise technical users (developers — our primary initial focus)
Enterprise content users (non-technical, content editors)
Single users (freelancers, individual devs)
Partner trials
AWS and other marketplace trials
The principle was simple, and it held up across every experiment we ran: the more targeted the audience, the more relevant the message, the better the conversion. Our best-performing messages — highly targeted, delivered at the right moment in a session — converted at around 20%. The broad, untargeted ones converted at 1-2%. That 10x gap was the whole argument for segmentation.
This tracks with what MarTech found when studying behavioral vs. demographic targeting — within the same demographic group, nearly 90% of people behave differently from one another. Demographics tell you who someone is. Behavior tells you what they're actually trying to do.
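Mechanically, segmentation starts with routing each signup into one of the intake channels above. A sketch of that routing logic — the channel labels and signup fields here are assumptions for illustration, not the real attribution schema:

```python
# Illustrative intake routing: assign each trial signup to one segmented
# channel, which then determines its nurture funnel, onboarding flow,
# and messaging. All field names and labels are hypothetical.

def route_channel(signup: dict) -> str:
    """Assign a trial signup to one of the segmented intake channels."""
    if signup.get("source") == "marketplace":
        return "marketplace-trial"        # AWS and other marketplaces
    if signup.get("partner"):
        return "partner-trial"
    if signup.get("company_size", 0) <= 1:
        return "single-user"              # freelancers, individual devs
    if signup.get("role") == "developer":
        return "enterprise-technical"
    return "enterprise-content"           # non-technical content editors
```

The point isn't the branching logic — it's that the channel assignment happens once, at intake, and every downstream touchpoint inherits it.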
Defining activation (and why user interviews didn't help here)
Because I owned the very top of the funnel — the first minutes and hours of a trial — traditional user interviews weren't a useful signal. By the time someone could articulate what they liked or didn't, they were well past the part of the journey I was trying to fix.
So instead of interviews, I defined TTV milestones: specific in-product actions, segmented by channel, that we tried to usher users toward. For a technical developer, activation looked something like:
Clone the starter repository
Install dependencies
Create a Stack
Create a Delivery Token
Set up environment variables
Turn on Live Preview
That sequence became the template for what Contentstack now calls Kickstarts — the guided getting-started flows that ship with the product today.
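A TTV milestone sequence like this is easy to operationalize as a checklist over behavioral events. Here's a minimal sketch; the event names mirror the developer sequence above but are my own illustrative labels, not real product instrumentation:

```python
# Hypothetical activation tracker: map a user's observed events against
# the ordered milestone list for their channel.

TECH_MILESTONES = [
    "repo_cloned",
    "deps_installed",
    "stack_created",
    "delivery_token_created",
    "env_vars_configured",
    "live_preview_enabled",
]

def activation_progress(events, milestones=TECH_MILESTONES):
    """Return (milestones_completed, is_activated) for a user's event set."""
    done = [m for m in milestones if m in events]
    return len(done), len(done) == len(milestones)
```

Segmenting the milestone list by channel — a different list for content editors than for developers — is what made activation measurable per funnel rather than as one blended number.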
The Help Center: diagnosing the real drop-off
The biggest single improvement came from the in-app Help Center, and it started with a pattern in the data.
We had a standard onboarding checklist — about five tasks to get a new user oriented. When I looked at session data, I noticed consistent gaps: 1-5 minutes of zero GUI activity, then the user would come back. This happened over and over.
What was going on? After doing some deep diving into session replays, we concluded that users were leaving the app to find help. They'd hit a step they didn't understand, open a new tab, go to the docs site or the learning portal, try to find what they needed, and come back. Every one of those context switches was a chance to lose them.
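The diagnosis itself is simple to express: given a session's event timestamps, flag idle stretches of one to five minutes — long enough to mean the user left the GUI, short enough to mean they came back. A sketch, with the thresholds from the analysis:

```python
# Sketch of the gap diagnosis: flag 1-5 minute stretches of zero GUI
# activity in a session's event stream. Timestamps are in seconds.

def help_seeking_gaps(timestamps, low=60, high=300):
    """Return (start, end) pairs where the session was idle for 1-5 minutes."""
    ts = sorted(timestamps)
    return [
        (a, b)
        for a, b in zip(ts, ts[1:])
        if low <= (b - a) <= high
    ]

events = [0, 20, 45, 210, 220, 900]
# the 45->210 gap (165s) fits the pattern; the 220->900 gap (680s) is
# too long and more likely a genuine session break
```

Counting how many sessions per user contained these gaps, and where in the checklist they occurred, is what pointed us at specific steps rather than a vague "users get stuck."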
Before the Help Center existed, users who needed help had two options: submit a support ticket or use Stacky, a homegrown chatbot that wasn't well received.
So we built the Help Center directly into the app — contextual help, surfaced where users actually needed it, without leaving the product. The results:
30% more session stickiness — users interacting more within sessions
10% lift in onboarding funnel conversion
30%+ decline in low-tier support tickets from in-app users
That last number mattered a lot internally. It meant the Help Center wasn't just improving the user experience — it was reducing load on the support team.
Experimentation: timing mattered as much as content
We ran a lot of A/B tests across in-app messages, guided tours, and nudges. The most useful finding wasn't about any single message — it was about when to show up.
We tested whether it was better to surface a message at the start of a session, when the user is less in-context, or deeper into a session at a more relevant moment. Context won, consistently. A message shown when someone is actively working on a task related to that message outperformed the same message shown at session start.
This shaped everything we did afterward: we stopped frontloading messages and started triggering them based on where users were in their workflow.
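In practice this meant each message carried a context condition, and the trigger fired on the user's current action instead of at session start. A minimal sketch — message ids, context names, and structure are all illustrative assumptions:

```python
# Illustrative contextual triggering: a message fires only when the user's
# current event matches the workflow context it was written for, rather
# than being frontloaded at session start.

MESSAGES = {
    "token_help":  {"context": "delivery_token_page", "text": "Creating a token? ..."},
    "preview_tip": {"context": "live_preview_panel",  "text": "Try Live Preview ..."},
}

def messages_for(current_event: str):
    """Return the ids of messages relevant to the user's current context."""
    return [mid for mid, m in MESSAGES.items() if m["context"] == current_event]
```

The same message content, gated this way instead of shown up front, is what produced the targeted-at-the-right-moment numbers described earlier.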
The onboarding playbook — and where it broke down
As the onboarding frameworks matured, I built them to be reusable. Other product teams at Contentstack — the ones building features like Automate, Launch, and Lytics — could spin up onboarding and messaging for their own features using the same patterns and tooling.
The idea was sound: the feature teams knew their products better than I did, so they should own the messaging content while using the infrastructure I'd built. Adoption of those new features ended up in the 10-15% range. Not great.
But here's what I learned from digging into the data: the gap wasn't in onboarding. The messaging was reaching users. The flows were being entered. The problem was upstream — a disconnect between what the product teams had built and what users actually needed. The features themselves hadn't found their fit yet.
That was a hard thing to surface, and not a gap I could bridge from the growth side. But being able to separate "the onboarding isn't working" from "the product isn't resonating" — and show the data behind that distinction — was one of the more valuable things I did in the role.
Tooling decisions
We started with Appcues for in-app messaging and onboarding. It worked fine for the basics, but after about a year we had outgrown it. The targeting and personalization we needed — across segmented channels, multiple data centers, and different user personas — required more complexity than Appcues could handle. We switched to CommandBar, which Amplitude eventually acquired, and it gave us the flexibility to build the kinds of targeted, context-aware flows I described above.
What I'd carry forward
Three years of building this from zero taught me a few things I keep coming back to:
Instrument before you optimize. I spent the first few months wiring data systems together before touching a single user flow. That felt slow at the time. It was the best decision I made.
Context switches kill onboarding. The Help Center data — those 1-5 minute gaps — told me more about what was wrong than any survey could have. If your users are leaving the product to figure out how to use the product, you have a design problem, not a documentation problem.
Separate onboarding failure from product failure. When Automate/Launch/Lytics had low adoption, it would have been easy to blame the onboarding. The data told a different story. Being willing to surface that distinction, even when it's uncomfortable, is half the job.
Targeted always beats broad. 20% conversion vs. 2% on the same type of message, just with different targeting. The math is clear every time.