Becoming product-led: assessment of my practices against 3 approaches from Pendo
This year I visited the biggest product management conference, Product Con in San Francisco, organized by Product School. After two years of the pandemic and everything being online, it was nice to meet fellow product managers offline, in real life!
One of the key topics of the event was product-led growth: in fact, 20% of talks and panel discussions were about it. So the buzz was real :)
For a deeper dive into the topic, Gainsight and Pendo distributed their books:
I read both books and decided to reflect on my experience and document the takeaways I picked up to become a better product manager. We’ll start with The Product-Led Organization (272 pages) by Todd Olson of Pendo. Keep reading for cherry-picked insights ;)
By the way, in the article I will also share a useful BONUS that will help you and your organisation become product-led!
Numbers don’t lie
First, I’d like to share some interesting data points from the book. Those insights might inspire you to read the book and extract your own takeaways to accelerate your product and career:
- Nearly 84% of projects will either fail or go over budget.
- Only 12% of software is ever used. Based on Pendo research, some $29.5 billion was invested in features that were never used.
- 80% of features are rarely or never used.
- 86% of consumers are willing to pay more for an upgraded experience, and 55% are willing to pay for a guaranteed good experience.
- On average, publicly traded cloud companies spend 21% of revenue annually on R&D.
- The average tenure of an employee these days is just 18 months.
- Without a feedback database, product managers spend about 20–25% of their time just organizing feedback from inside and outside the organization.
- Studies have found that only 5% of companies reliably respond to customer feedback.
- Studies have found that product-led companies exceed peers in profit margins by 527%.
So what is a product-led organization?
Product-Led organization — a business that makes its products the vehicle for acquiring and retaining customers, driving growth and influencing organizational priorities. They put the product experience at the very center of everything they do.
— The Product-Led Organization by Todd Olson
Pendo has worked with thousands of such organisations and identified six core characteristics that product-led companies have in common:
While reading the book, it was quite pleasant to realise that I’ve been intuitively using the majority of the practices described. But I was also excited about the enhancements I can incorporate into my product management craft. I will describe three takeaways in this article.
Structure of the article
For this article I will reflect on my product management experience. I will share some of the processes or approaches I used in my work (as is), followed by insights I picked up from the book (to be), and wrap up with my thoughts on how I can integrate them into my practice (will I adopt that?).
Let’s get started! :)
Takeaway 1: Roadmap Enhancements
As Is:
Below is the roadmap that I created in the past for the organization I worked for:
* Due to an NDA, the data has been removed, but you can still understand the approach.
Some notes:
- The roadmap is based on various research (competitor, market, user, etc.), the vision document and OKRs.
- The roadmap was created for 12 months. We usually did that in October so that resourcing could be planned. For the current quarter, the initiatives were the clearest, and we were committed to delivering them. For the following quarter and later in the year, the initiatives were vaguer, and the plan of action was updated based on feedback from users, insights from the market and other research.
- Objectives, or themes, come from the OKRs (Objectives and Key Results) document. Typically, the themes were selected for the year and revisited quarterly. Thus the roadmap is tied to the separate OKR document.
How I created alignment in the team:
Monthly we had an alignment meeting with the leadership team, the development team and the CS team. There we discussed:
- progress on the roadmap and potential changes,
- initiatives planned for the upcoming month,
- key business and operational metrics.
I also had a separate meeting with the partners we had integrations with, to learn about their plans and share the next steps for our product. This way we could ensure that new system changes were anticipated and no negative impact was created.
Improvement 1: Assign specific metrics to each item in your roadmap
Initiatives should be associated with specific goals for both business and usage. The KPIs should be set ahead of time and integrated within the roadmap itself.
— The Product-Led Organization by Todd Olson
As Is:
This shouldn’t be confused with the Key Results from the OKRs document. In my experience, the KRs were a bit higher level and didn’t tie to the specific features we developed; let’s decode that below.
For all of the initiatives that our team developed, I specified the metrics to track in PRD documents, like how many users engage with a given feature and task success (i.e. following all steps to get to the value of the feature). Then the development team added the needed hooks in the code, and I was able to add those metrics to the product metrics dashboard in Amplitude (one of the tools we used to track user behavior). Everybody on the team could see the adoption of new features and other metrics there.
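To illustrate what such a hook might look like, here is a minimal sketch using Amplitude’s browser SDK; the event and property names are hypothetical, not the ones we actually used:

```typescript
import * as amplitude from '@amplitude/analytics-browser';

// Initialise the SDK once at application start-up.
amplitude.init('YOUR_AMPLITUDE_API_KEY');

// Hypothetical hook fired when a user opens the new feature.
function trackFeatureOpened(featureId: string, userRole: string): void {
  amplitude.track('Feature Opened', {
    feature_id: featureId, // which roadmap initiative the event belongs to
    user_role: userRole,   // e.g. "creator" or "consumer"
  });
}

// Hypothetical hook for the "task success" metric from the PRD:
// fired only when the user completes the last step of the flow.
function trackTaskSuccess(featureId: string): void {
  amplitude.track('Task Completed', { feature_id: featureId });
}
```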
To Be:
From the start, include usage goals like “this feature should be used by X% of users within 30 days”. This way, it will be possible to develop a strategy to raise user awareness of the update, as well as more targeted user training (e.g. for the specific user persona the feature will benefit). Additionally, it will help to validate whether the prioritised initiative yields the expected impact.
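As a sketch of how such a goal could later be checked against the tracked events (the data shapes here are assumptions for illustration):

```typescript
interface UsageEvent {
  userId: string;
  featureId: string;
  timestamp: Date;
}

// Share (in %) of the active user base that used the feature within the goal window.
function adoptionRate(
  events: UsageEvent[],
  featureId: string,
  activeUsers: number,
  launchDate: Date,
  windowDays: number,
): number {
  const windowEnd = new Date(launchDate.getTime() + windowDays * 24 * 60 * 60 * 1000);
  const adopters = new Set(
    events
      .filter(e => e.featureId === featureId && e.timestamp >= launchDate && e.timestamp <= windowEnd)
      .map(e => e.userId),
  );
  return (adopters.size / activeUsers) * 100;
}

// Usage goal from the PRD, e.g. "used by 40% of users within 30 days":
// const goalMet = adoptionRate(events, 'new-feature', 1200, launchDate, 30) >= 40;
```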
Will I adopt that?
I’m adding this process update to my product manager toolbox! I think it will be relevant in the PRD document for initiatives. It will also help to understand users’ perception of the added value from implicit behavior: usage!
Improvement 2: Multiple roadmaps for different audiences
I am a bit skeptical here. I think this might create additional overhead for a product manager, unless the roadmaps represent different visualisations of the same data and update automatically as changes are applied to the data set.
However, it makes sense that different levels of granularity are needed for different audiences:
- The leadership team is interested in a high-level overview without small nuances; that was the focus of my roadmap.
- Marketing and sales teams are more interested in the functional value and use-cases to include in their demo decks, etc.
- Support teams are interested in the product areas that are about to be updated, with linked documents to help them successfully support customers who face challenges.
- Partners with integrations need more context to anticipate the impact on the collaboration.
- Users want to know if the feature they asked for will be delivered.
- …
Note:
The challenge brought up in the book that multiple roadmaps can address is the following:
“Too often roadmaps are shared without any of this [“why” behind the priorities] explanation or reasoning.”
— The Product-Led Organization by Todd Olson
The way I solved the problem of communicating “the whys” was through alignment meetings with different stakeholders: leadership plus my team (designers and engineers), partners, etc.
And for users, the roadmap for the quarter was publicly shared.
Will I adopt that?
A lot depends on the tool used for roadmapping. If different visualisations of the same data are supported, this might be a great opportunity to help the different departments working on the product’s success. I would also find it quite convenient to alter or update data in one source document and have the representation updated automatically in multiple roadmap views.
Improvement 3: Product delivery predictability
“Without predictability around how the team will perform, product leaders risk publishing a roadmap the team can’t rely on.”
— The Product-Led Organization by Todd Olson
As Is:
I barely know any team that doesn’t estimate tasks in some sort of points. In my experience, in the grooming sessions that I led, the team voted on complexity points for the initiatives we discussed. Later we used that data to track the team’s velocity and see the dynamics.
Additionally, I tracked progress in JIRA: for initiatives I created separate epics and could understand the progress made on each epic based on the status of its child issues. Prior to monthly meetings with the stakeholders, I updated the roadmap based on the information in JIRA.
To Be:
The approach suggested in the book: take the number of story points completed and divide it by the number of story points the product team originally committed to, for each item in the roadmap. The resulting percentages reveal whether the team is behind, on schedule, or ahead of schedule. See the image from Pendo below:
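In code, this calculation is a one-line ratio per roadmap item; here is a minimal sketch with made-up numbers (the item name is hypothetical):

```typescript
interface RoadmapItem {
  name: string;
  committedPoints: number; // story points committed at planning time
  completedPoints: number; // story points completed so far
}

// Percentage of the original commitment delivered, per roadmap item.
function deliveryProgress(items: RoadmapItem[]): Map<string, number> {
  return new Map(
    items.map(i => [i.name, (i.completedPoints / i.committedPoints) * 100] as [string, number]),
  );
}

// Example: 45 of 60 committed points done => 75% of the commitment delivered.
console.log(deliveryProgress([
  { name: 'Bulk export', committedPoints: 60, completedPoints: 45 },
]));
```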
Will I adopt that?
Handy, especially if done automatically!
This could be done from within JIRA Software, both Cloud and Data Center. On the roadmap, the most up-to-date statuses are displayed automatically as the development team closes their tasks!
The calculation suggested in the book is a manual approach. However, I think this approach still stands a chance:
- if automation is not available, i.e. there is no integration with JIRA;
- if the JIRA roadmap capability is not used;
- if another task tracker is used that does not offer roadmapping capability or integrations.
With the adoption of delivery predictability, there could be transparency ahead of the alignment meetings on how far along each initiative is! It will also be helpful for marketing and sales teams, since their activities will be informed by any changes to the initial schedule.
Takeaway 2: Enhancing NPS
Collecting customer feedback is critical because it can add meaning to the behavioral data being collected. Additionally, it is one of the best tools for spotting signs of friction and a resource for new ideas or areas for research.
As Is
I was able to bring some improvements to how we collected customer sentiment. Initially, a company-wide survey was distributed yearly across the organization via email. From connecting with product managers at other large companies, I learned that this is a pretty standard approach for collecting feedback about internal tools. There are two problems with it:
- The cadence is too infrequent.
- There’s a potential for a completely irrelevant audience — an audience that might not even use the product.
So, as a first step to improve that, I connected with a team that could help us integrate an NPS module within the product. This way, our users could leave feedback right on the spot, whether they had a delightful, neutral, or dreadful experience! Below is the in-product pop-up that our users saw:
As a result, each month we received ~100X more customer ratings and ~15X more written user feedback than we had collected yearly via email! In-app NPS also provided higher-quality insights, as users gave feedback while in the context of using the solution.
The challenge I faced, however, was that NPS was collected in one solution, while user behavior and user parameters (like geography, gender, etc.) were stored in a different tool. Thus enriching the NPS score with that data required additional effort.
To Be
With the as-is approach described earlier, I was already collecting feedback from users in-app, so it came from the target audience. A step further into decoding insights from users would be:
- Grouping data by various user parameters.
- Grouping data based on user behavior.
For example, the only grouping that I performed was by user role: in the context of our solution, either knowledge creator or knowledge consumer. However, it’s possible to dive deeper and try to uncover trends across different user segments, as sketched after this list:
- Geography
- Novelty to the application — new users vs. seasoned users
- Heavy solution users vs. occasional users
- etc.
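As a minimal sketch of this kind of slicing, here is how NPS could be computed per segment. The standard NPS formula is the share of promoters (scores 9-10) minus the share of detractors (scores 0-6); the data shapes and segment labels below are assumptions for illustration:

```typescript
interface NpsResponse {
  score: number;   // 0-10 rating from the in-app survey
  segment: string; // e.g. "new user", "heavy user", or a geography
}

// Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
function nps(responses: NpsResponse[]): number {
  const promoters = responses.filter(r => r.score >= 9).length;
  const detractors = responses.filter(r => r.score <= 6).length;
  return ((promoters - detractors) / responses.length) * 100;
}

// Group responses by segment and score each group separately.
function npsBySegment(responses: NpsResponse[]): Map<string, number> {
  const groups = new Map<string, NpsResponse[]>();
  for (const r of responses) {
    const group = groups.get(r.segment) ?? [];
    group.push(r);
    groups.set(r.segment, group);
  }
  return new Map([...groups].map(([seg, rs]) => [seg, nps(rs)] as [string, number]));
}
```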
It would also be interesting to see whether the score depends on active usage. I checked Pendo: they provide a basic capability for today’s product management competency, in-app NPS collection, but they also offer a pretty awesome way to slice and dice user scores:
Based on that it’ll be possible to detect trends or formulate and validate hypotheses:
- Maybe users who engage with the product more provide higher scores because they are able to see more of the value. In this case, it’s possible to further analyse the user flows of this segment within the application and design onboarding or in-app messaging to motivate users who have the same use-cases but use the solution less actively. This approach could help less active users realise value from the product.
- Or vice versa: the more users engage with the product, the lower the NPS score they provide. In this case, more insights will be required, perhaps user interviews to understand where the friction is coming from. There’s a potential that early engagements with the product created certain expectations (from positioning statements, etc.) that further interaction with the product failed to deliver on. Through more in-depth research it will be possible to uncover potential problems in the product.
Will I adopt that?
I think there’s no such thing as too much feedback. However, with a large user base and an in-app opportunity for users to leave feedback, it is challenging to process and decode it all. It’s important to ensure that along with the feedback, the product manager can also “scan” the context: is it the loudest person? Is it a paid or a free user? Is it a power user or an infrequent user? Context can empower product teams to be more strategic when drawing conclusions from user feedback. So again, I’m adding this enhanced approach to my NPS collection practice.
Additional thoughts
Getting additional context applies to other types of research too. For example, when we did usability testing with a designer on the new design of the platform, I provided her with a list of users that differed in usage intensity, novelty to the product, role, etc. This way the designer could perform moderated usability tests and we could gather more balanced feedback.
Takeaway 3: Self-education for users
In a 2018 survey by the Statista Research Department, respondents in the USA and worldwide were asked for their opinion on self-service portals provided by brands. It found that 88% of respondents from the United States expected brands or organizations to have a self-service support portal. Similar data is provided by Zendesk:
This is no surprise, as people around the globe become more tech-savvy and gain access to the internet. Additionally, thinking back on the impact of COVID-19, social interactions were affected as well…
As Is
I joined the company during an active phase of the COVID-19 pandemic, when our mode of work was 100% from home. I needed to learn the product myself and didn’t want to distract the team with too many questions.
As happens quite frequently, it’s challenging to keep up with a product that is constantly evolving, especially for products that have existed for a decade or more. I discovered that the FAQs, user guides and other user-assistance information were quite outdated. That’s when I realised that we could do a better job with users’ self-service provisioning, so improving it became one of the priorities for the team. As a result of our efforts, we had multiple channels for user education:
I focused on newly introduced channels:
- Video Tutorials: illustrations of the product in action.
- Product Tours: contextual guidance that walks users through step by step.
- Roadmap: supplementary to user education, so users know how the system will be evolving.
My team members helped to update the existing channels:
- FAQs
- Knowledge base, user guides
Additionally, the product team had a tight connection with the support team. Once a month I had a call with support members to discuss CSAT metrics, as well as the nature of incoming requests. This made it possible to spot the areas that created friction for users and target them for further improvement.
To Be
Thus far, our processes for user self-service looked product-led, but from the book I learnt about the next step to take: measuring customer education. Specifically, there are three areas to collect data on:
- Engagement with training materials
This approach helps to understand whether users are curious and eager to learn about the service and its updates. So far I had measured only user engagement with the FAQ page (visits) and the number of views on each video tutorial, and I was planning to measure how many users went through the product tour (that capability was not yet available in the service we integrated). But there’s so much more:
1) Measure which specific FAQ articles are read.
2) Measure the search words used on the FAQ page; this also helps to understand whether users and the product use the same vocabulary (see the sketch after this list). For example, “main page” and “home page” can be used interchangeably, and it’s important to let users find help no matter which term they use.
- Support ticket volume
There’s an additional twist here, since we already had a monthly meeting with the support team to review ticket volume, types of requests (by feature) and CSAT scores. The next step could be segmenting requests to analyse whether they come in for new vs. existing features. This way it’s possible to measure the education efforts.
Additional information that could give an even more holistic insight is connecting bug data: which parts of the product have the biggest number of bugs. This is more about the quality of the product, but paired with ticket volume by feature, it can be a useful addition to the analysis.
- Long-term retention
This should be considered an important downstream measure of education effectiveness.
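To make the engagement measurements concrete, here is a minimal sketch of tallying FAQ search terms to surface vocabulary mismatches, as mentioned above; the event shape is an assumption for illustration:

```typescript
interface FaqSearchEvent {
  query: string; // raw search string the user typed on the FAQ page
}

// Count how often each normalised search term occurs, most frequent first.
function topSearchTerms(events: FaqSearchEvent[], limit = 10): [string, number][] {
  const counts = new Map<string, number>();
  for (const e of events) {
    const term = e.query.trim().toLowerCase();
    if (term.length === 0) continue;
    counts.set(term, (counts.get(term) ?? 0) + 1);
  }
  return [...counts.entries()].sort((a, b) => b[1] - a[1]).slice(0, limit);
}

// Frequent terms that return no FAQ article hint at a vocabulary gap,
// e.g. users searching "main page" while the docs only say "home page".
```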
Will I adopt that?
For me, user education used to be a “to-do” item to be checked off. However, posting education materials shouldn’t be the final step. Education should be treated as an essential part of the product, so measuring its impact should be an integral part of the process, just like we measure everything else in and around the product. Since I’m a fan of numbers and data visualisation, I’ll definitely add that to my PM toolbox!
Conclusion
So these were the three directions I reflected on today. It was a great exercise for me, and I hope the readers of this article will find it useful too.
There are more “tips and tricks” that I will incorporate into my work, and I’m sure I’ll be returning to the book for help. Thankfully, I left a lot of sticky notes :)
I highly recommend reading The Product-Led Organization by Todd Olson. It will benefit readers in two ways:
- Refresh and structure your product management knowledge.
- Help you to spot some new frameworks/approaches to incorporate into your work.
BONUS!
For the most enduring readers who made it this far (or for those who scrolled all the way down), I’d like to share a bonus from Pendo! We all prefer different formats of information: I enjoyed the book, but I also enjoy video. For those who prefer the latter, there is the Product-led Certification Course!
Course registration is free for a limited time; sign up here.
Thank you for dedicating the time to check out my article. I hope you found it helpful. Feel free to share your feedback, or other practices you use to get better at your product management craft, in the comments section below.