2024-10-10
Develop new software: building an effective MVP
Let's discuss Minimum Viable Product (MVP) from the point of view of a Software Engineer. We'll cover when to build an MVP instead of the full idea, how to build it, who to involve, and some common traps I've encountered and how to avoid them.
Building an MVP is a great tool to de-risk a complex project early on. If done right, it's cheap and helps estimate the complexity of the project. Conversely, if done incorrectly, it can lead to wasting a lot of time obsessing over the wrong problems without making any meaningful step towards proving the idea. For example, we may mistake an underwhelming market response caused by an excessively buggy initial prototype for a lack of interest in the product, when building a simpler UI could have provided much more signal.
Why build an MVP
There are many reasons for building an MVP. Firstly, we might want to convince ourselves that a specific technical approach is feasible and have a better idea of how much time it will take. This helps us either increase or decrease our confidence in the proposal.
The second reason is to persuade others. We might want to prove to investors that our team is capable of delivering results with a small demo. We may also reassure the product manager that a specific implementation is achievable within a reasonable timeframe. Additionally, we might aim to convince company leadership that we can quickly move from idea to market and create a new profitable business line.
While building the MVP we can try multiple technical approaches. For example, we may test a third-party library versus building the functionality ourselves. It could also involve trying a new team dynamic, like integrating machine learning researchers, designers, and full-stack engineers into the same team.
I organize projects on a spectrum. The Moonshot is a highly technically challenging piece of software with many unknown unknowns. In this case, we should prioritize uncovering problems early on in the MVP over the quality or scalability of the initial prototype. The Execution Heavy project has a much clearer goal from the beginning, for example when we're implementing something that already exists but are convinced our implementation can be much better. Here we should prioritize execution speed in the MVP and the final quality of the prototype, especially if we're aiming to create something 10x better than the competition.
Next, let's consider who should be involved in building the MVP.
Who to build it with
I often build MVPs on my own. They are the quickest to create because the goal is clear to me and I don't need to coordinate with others. I mainly use MVPs to clarify my thoughts and occasionally to present a complex idea to colleagues and leadership. MVPs are excellent tools for convincing others that my idea is a worthwhile investment.
We might want to build MVPs with friends for fun during hackathons. Creating MVPs can help strengthen relationships between people. I find it extremely satisfying to prove whether an idea works or not with a team. Even if the MVP proves that the idea is not worth pursuing further, I come away with a much better understanding of the challenges that need to be solved to make it a reality.
I have found through experience that a team of 2 to 4 highly technical people works best. All discussions stay focused on problems, and everyone understands what others are working on.
Next, let's take a look at how to build a successful MVP.
How to build it
Depending on why you're building the MVP and who you're working with, you may adopt different strategies. If you're building it for yourself, it's crucial to implement just enough to confirm that the approach is solid. This involves uncovering all the unknown unknowns, which are problems you didn't know existed, and addressing most of the known unknowns, which are problems you're aware of from the beginning, but don’t know how to solve.
If we're building to validate a market hypothesis, it's essential to polish the prototype enough to gather valuable market signals about the idea, rather than feedback on a poor UI. You should limit the features to the minimum necessary to prove or disprove a hypothesis agreed upon before starting.
If we're building a take-home project for a job interview, we should implement one or two complex features to showcase our skills. We should leave plenty of opportunities to discuss how some non-implemented features could enhance the MVP. I personally prioritize polishing code and architecture over implementing rich functionality; I implement something I can present in a little more than 30 minutes.
If we're building a Moonshot MVP, the focus is on quick iteration and sometimes implementing different solutions to the same problem multiple times. High-level automated tests are very helpful here. For example, integration tests ensure we're not introducing regressions without being strict about the interfaces between components that can change multiple times before completing the MVP.
If we're building an Execution Heavy MVP, we want to focus on team velocity. We want to figure out who the best people are to leverage for the well-defined tasks and how to communicate effectively with each other. Since it's clear what to build, it can be convenient to specify interfaces between components and write unit tests to enforce them.
Next, let's take a look at traps I've fallen into in the past and how to avoid them.
Traps to avoid
Scope creep is the worst enemy of an MVP. Adding unplanned features will delay the launch and dilute the signal. If the launch fails only due to a few missing features, it was doomed from the start. I recommend being ruthless about which features to exclude from the MVP and instead implementing them as "fast follows" after a successful launch. My theory is that as engineers we have a lot of ideas and convince ourselves we can't launch without all the features that come to mind while developing. We should resist this urge.
This happened to me recently while developing an AI assistant at the SPC hackathon in 2024. I added many unnecessary features, like crossing out questions from the audience once the presenter had answered them. They added a lot of complexity and didn't contribute much to the demo. In fact, one even failed on stage during the presentation.
Beware of MVPs that seem valuable in limited scenarios but won't scale. A demo might work well with cherry-picked inputs but struggle with real data. For example, consider LLM summarization of books: if the summary of an obscure book contains a hallucinated sentence 5% of the time, users will quickly find many such inputs and speak negatively on social media. This rapidly erodes trust in the product after the MVP is unveiled. It can be hard to avoid without a lot of experience and intuition in the field. My suggestion is to not cherry-pick data for the demo and to honestly present the MVP with all its limitations.
Another trap is not accepting that the MVP can fail. Not all tools need to exist. Sometimes the idea is good, but it's either too early, too late, or there are better solutions available. The implementation might also have major issues. Abandon MVPs that fail to gain traction. MVPs are cost-effective only if you cut your losses early.
Building too slowly is another mistake. We should find some sort of unfair advantage when creating an MVP. Perhaps a new technology makes a previously difficult problem 10x faster, turning it into low-hanging fruit. Sometimes it's new libraries or developer tools that speed up the process. Other times, it's adding the right people with energy and a positive attitude to the team and spending many hours together. Whatever the "unfair advantage" is, if you don't find it and the MVP seems to be taking forever, it's better to cut your losses.
Conclusion
In this post we have seen two different types of MVPs, Moonshots and Execution Heavy, and discussed their differences and what to focus on in each.
Building MVPs is more art than science; we get better with practice. I often build MVPs of systems I read about in papers or blog posts to practice this myself. I really think it's a powerful tool in software development, especially now that implementing multiple solutions to the same problem is becoming dramatically faster with AI-assisted code development (e.g., Copilot or Cursor).