Often when we talk about how we develop our products we refer to Minimum Viable Products. As a basic tenet of agile and lean development, the MVP lets us think in thin slices and maximize our agility. While this is a powerful approach, I decided to zoom out and think about the concept a bit more. As I did, I found a definition of the MVP that elicited a few reactions:
Eric Ries defined an MVP as “that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.” This validated learning comes in the form of whether your customers will actually purchase your product.
My first reaction was that this is a bit different from how many of us, myself included, think and speak about it. We tend to focus on the product portion of this definition but skip over the “validated learning”. On many teams, we speak of MVPs as mini products, assuming the product will be successful and that the purpose of the MVP is to refine the solution and maximize its impact. This misses the critical component of the MVP, which is to gather data about the viability of the product itself. That assumption also leads to waiting for user research, market research, and competitor research before development. These are all valuable tools for understanding the problem, but sequencing the work this way risks data paralysis, slowing, and in some cases preventing, the team from moving forward. What Eric is calling out above is that MVPs are themselves a source of data to iterate on our decisions, not the sequential next step after those other data-generation techniques. We can and should learn by doing and observing.
My second reaction was that focusing solely on product-market fit may also miss some valuable learning and impact from an MVP. There are also learnings about technical feasibility and complexity that should be part of the thinking, lest we miss whether the product can be built in the ways users will actually use it, and at a cost that makes sense. Skipping this, we tend to organize our efforts around the things we know best, with the idea of derisking development by bootstrapping it with the least risky pieces first. However, if we instead tackle the riskiest pieces first and focus on learning, we can derisk the entire MVP with a fail-fast mindset. By not considering technical feasibility, we learn only about the Return side of ROI and not the Investment side.
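To make that last point concrete, here is a back-of-the-envelope sketch. Every figure is hypothetical, invented purely for illustration, and not a number from the product described below:

```python
# Hypothetical illustration of the Return vs. Investment sides of ROI.
# All figures here are made up for the sake of the example.

projected_return = 500_000      # expected value the product generates
planned_build_cost = 200_000    # the Investment we planned for

roi_planned = (projected_return - planned_build_cost) / planned_build_cost
print(f"ROI with planned costs: {roi_planned:.0%}")   # 150%

# A technical-feasibility learning an MVP can surface:
# computing the metrics is far more expensive than assumed.
metrics_compute_cost = 400_000  # ongoing cost we had not measured

actual_investment = planned_build_cost + metrics_compute_cost
roi_actual = (projected_return - actual_investment) / actual_investment
print(f"ROI with measured costs: {roi_actual:.0%}")   # about -17%
```

The point is not the specific numbers but that only building and operating something can surface the Investment side; market research alone tells us little about what the product will cost to run.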
An example of this comes from my own experience helping build teams to create a two-sided marketplace that empowers creators to understand and engage with their users. Initially, it was unclear what we needed to build. By embracing the “whether your customers will actually purchase your product” definition from above, we decided to learn by doing and engaging with our users instead of waiting for our research efforts to prove the product’s viability. This meant engaging our engineering team early and treating its work as an additional data source, even without knowing where it would end. This was uncomfortable at first, as there were no “product requirements” and every sprint led to updated thinking and changes. But by maintaining this flexibility, the tech team iterated quickly and produced an alpha in five months. Years later, through the learning and efforts of many talented individuals, this product is used by hundreds of thousands of creators to help maximize the success of their careers.
It would be easy to stop here and point to the success of the product as validation of our approach, and in many ways it was an example of using the lean MVP approach. However, it is also an opportunity to think about how we could have used the MVP as a technical feasibility experiment. Our MVP was focused on maximizing product engagement, but in reality we had two fundamental high-risk questions: 1) would users be able to log in, and 2) was a platform for consumption metrics feasible?
Through the MVP we learned that users could indeed log in and engage with the metrics we provided. However, we also learned that the login infrastructure we used would not scale beyond our small alpha, and that computing the metrics was relatively expensive, a cost that had not been factored into our ROI. These were invaluable lessons, but they could easily have been missed because they were not part of the primary focus.
Imagine if we had focused on these core learnings from the early stages of development. Instead of concentrating solely on adding more potentially engaging features, we could have reduced scope, shipped in less time, and shortened the time to learn. We would have discovered both the technical issues and the product’s viability months earlier.
This is a happy example because we had great product fit, excellent engineering execution, and ample support from the business to make it a success. If we had learned that this was not a viable product we could build, we might have viewed the experience differently. I now use this lesson to empower the teams I work with to turn the riskiest product and technical questions into MVP learnings.