
What WWII Bombers Teach Us About Building Resilient Products

Learn how WWII bombers reveal insights about survivorship bias and how avoiding this bias helps product managers build more resilient and successful products.

TL;DR

Survivorship bias occurs when only successful outcomes are analysed, ignoring those that failed. This can cause misleading conclusions, overconfidence, and poor product management decisions. For product managers, this bias often impacts areas like product-market fit, customer feedback, and feature success evaluation, skewing perceptions of what works and leading to misguided strategies.

To avoid survivorship bias, always consider both successes and failures, seek out missing data, and analyse why some users or features failed. Doing so will provide a more complete understanding of product performance, support smarter decision-making, and help build more resilient products. Avoiding survivorship bias means better product-market fit, more effective retention strategies, and improved feature prioritisation, ultimately leading to more successful outcomes for your product.

Introduction

For the modern product manager, a keen understanding of data and metrics is seen as a non-negotiable skill. Regardless of seniority, from greenhorn through to veteran, the ability to craft detailed insights and recommendations for services, products, or features from customer usage data will always remain a highly desirable trait.

The point isn’t to cast fear and judgement over those only just falling in love with data; rather, the opposite. Like most, if not all, skills, with continued practice and attention to detail it is possible for anyone to progress from rookie to master. This article kick-starts a new series of unique lessons for Product Managers derived from inspiring real stories from across history and the world, giving us deep insight into what truly matters when developing products. These are lessons that can be applied on a daily basis and that truly move us from 0 to 1 in both product thinking and execution.

A Statistician Protecting WWII Bombers

Bombers were a type of aircraft used to destroy enemy infrastructure, supply lines, and military targets, weakening the enemy’s overall morale and ability to fight. These planes could fly deep into enemy territory, hitting targets other methods simply couldn’t reach. By continuously disrupting supply chains and production capabilities, bombers contributed significantly to the overall success of military operations and helped shift the momentum of the war. However, this ability to fly long distances into heavily guarded territory raised a pressing issue: keeping the planes, their cargo, and their crews safe. The question remained: how could they protect their bombers from being shot down by enemy fire?

How to Protect a WWII Bomber?

To solve this problem, military strategists turned to a group of brilliant minds, including the statistician Abraham Wald, and set the task of analysing the damage sustained by aircraft returning from missions in the hopes of developing methods to keep those aircraft out of danger.

The team began meticulously examining every bomber returning from missions, mapping where they had been shot and the extent of the damage caused. Over time, the patterns seemed clear: the wings, fuselage, and tail were all riddled with damage. The conclusion seemed obvious: add extra armour to these spots on the bombers to keep more planes in the sky.

Looking Beyond the Obvious

Assessing this data, the team arrived at what can be considered a very natural hypothesis: bombers get shot in three primary locations, therefore those locations must be best protected to avoid further losses. However, Wald saw something the rest of the team had completely missed. He looked beyond the obvious, beyond the horrifying damage to these aircraft that had captivated everyone else's focus and attention, and realised that the planes being studied were only the ones that had survived and made it back to the airbase. These planes had managed to return despite the damage they had sustained; in fact, the bullet holes observed were in places where a bomber could be shot and still remain functional! Wald’s real insight came from understanding what the team didn’t see, and could never have seen, through the planes that didn’t make it home.


Reinforcing the Weak Points

Wald deduced that the areas without bullet holes, the engine, the cockpit, and the critical structural points, were the ones that needed reinforcement. Planes hit in those areas weren’t making it home. By focusing on the absences, the gaps in the data, Wald realised that these were the vulnerabilities that needed to be addressed. His recommendation was to armour the parts of the planes that came back unscathed, precisely because the planes hit there hadn’t survived to tell the tale. Bombers were reinforced in the areas Wald recommended and, as a result, more planes made it home after their missions, furthering the Allies’ war effort.

Understanding Survivorship Bias

What is Survivorship Bias?

Survivorship bias is a form of selection bias that occurs when an analysis focuses solely on the individuals, groups, or entities that have “survived” a selection process while overlooking those that did not. This bias can lead to a systematic skewing of results, as only the characteristics of successful entities are considered, ignoring the potential insights offered by those that failed or were excluded. Survivorship bias is particularly problematic because it can produce erroneous or misleading conclusions about the factors contributing to the success or failure of a process, based on incomplete data.

More formally, survivorship bias can be defined as an error that results from concentrating on the subset of data that has passed some form of selection criteria while ignoring data that has been removed due to the same criteria. Survivorship bias often leads to overestimation of the success rate of a system, strategy, or process and can distort conclusions regarding performance, risk, or characteristics of a group.
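The overestimation this definition describes is easy to demonstrate. The sketch below uses entirely hypothetical data: a simulated population of outcomes centred on zero, from which only the “survivors” (positive outcomes) remain observable. Averaging the survivors alone makes typical performance look far better than it really is.

```python
import random

# Hypothetical illustration of survivorship bias: filtering on a
# selection criterion skews summary statistics upward.
random.seed(42)

# 1,000 outcomes drawn from a distribution centred on 0%,
# so roughly half fail and half succeed.
growth_rates = [random.gauss(0.0, 0.3) for _ in range(1000)]

# Survivorship filter: only positive outcomes remain observable.
survivors = [g for g in growth_rates if g > 0]

mean_all = sum(growth_rates) / len(growth_rates)
mean_survivors = sum(survivors) / len(survivors)

# Studying survivors alone dramatically overstates typical performance.
print(f"mean (all outcomes): {mean_all:+.2%}")
print(f"mean (survivors):    {mean_survivors:+.2%}")
```

The full population averages close to zero, yet the surviving subset averages strongly positive, purely as an artefact of the filter, not of any genuine underlying success.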


How does Survivorship Bias Apply to Product Management?

For product managers, it is essential to understand that survivorship bias can significantly skew decision-making throughout product and feature development, leading to misguided conclusions about product success and customer needs.

Unfortunately, survivorship bias can occur at any stage of the product development and launch process, meaning Product Managers should consider it whenever questioning data validity before any critical decision is made. That said, here are four places where survivorship bias is most commonly found (and yes, we did consider areas with potentially missing data):

  • Product-Market Fit
    Product managers might conclude that they have achieved product-market fit if the product resonates well with the existing user base and achieves growth milestones. However, if they ignore the segment of the market that never adopted the product or left early in the customer journey, they may fail to realise that the product has significant gaps. Survivorship bias here can result in a false sense of confidence about market fit, missing out on crucial feedback from those who chose competitors or had unmet needs.
  • Customer Feedback Analysis
    Product managers often rely heavily on customer feedback to improve products. If they only consider feedback from active or satisfied users—those who "survived"—they risk ignoring valuable insights from users who abandoned the product or churned. These lost users can provide essential information about missing features, usability issues, or areas of dissatisfaction. By focusing solely on the surviving users, a product manager may overestimate the product's appeal and fail to understand critical flaws that are driving churn.
  • Feature Success Evaluation
    When evaluating the success of a newly introduced feature, product managers may focus only on how the feature is used by existing customers. Survivorship bias can occur if they ignore customers who left after the feature was introduced. For example, they might incorrectly assume that the feature is a success because active users are engaging with it, but fail to see that the feature caused confusion or dissatisfaction among other users who stopped using the product.
  • Retention Analysis
    In retention analysis, survivorship bias can emerge if product managers only study users who have stayed with the product for a long time. These "surviving" users might represent a particular type of user that finds the product highly beneficial, but those who dropped off early are ignored. This can lead to an overestimation of product retention rates or user satisfaction, without understanding the key drop-off points that prevent wider adoption.

Survivorship bias in product management can lead to overconfidence in a product's success or the effectiveness of features, because of a narrow focus on users who remained engaged or performed desired actions. To mitigate survivorship bias, Product Managers need to intentionally gather and analyse feedback from churned users, consider the perspectives of those who abandoned the product, and examine why certain segments were unsuccessful. Including the "missing data" helps create a more complete understanding of product performance, informs smarter feature prioritisation, and enables better decision-making for product success.
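The retention and feedback points above share one concrete fix: compute your metrics over the whole cohort, not just the users still around. The hypothetical sketch below (all numbers invented for illustration) shows how a survivor-only view of satisfaction can look excellent while the full-cohort view reveals a churn problem.

```python
# Hypothetical cohort of 100 sign-ups: whether each user was still
# active at day 30, and their satisfaction score (1-5).
cohort = (
    [{"active": True,  "satisfaction": 4.5}] * 20 +  # engaged survivors
    [{"active": False, "satisfaction": 2.0}] * 80    # churned early
)

# Survivor-only view: satisfaction looks great.
active = [u for u in cohort if u["active"]]
survivor_satisfaction = sum(u["satisfaction"] for u in active) / len(active)

# Full-cohort view: retention and overall satisfaction tell another story.
retention_rate = len(active) / len(cohort)
overall_satisfaction = sum(u["satisfaction"] for u in cohort) / len(cohort)

print(f"retention:              {retention_rate:.0%}")         # 20%
print(f"satisfaction (active):  {survivor_satisfaction:.1f}")  # 4.5
print(f"satisfaction (overall): {overall_satisfaction:.1f}")   # 2.5
```

A dashboard built only on the active users would report a 4.5 satisfaction score; including the “missing” 80 churned users exposes both the 20% retention rate and a far lower overall score.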

Examples of Survivorship Bias in the Modern Day

As our inspirational WWII bomber story shows, once highlighted, the effects of survivorship bias seem obvious and almost foolish to have missed. Yet modern-day examples of survivorship bias can still be found, and again, once highlighted, they seem apparent from the outset.

  • Machine Learning:
    In machine learning models that depend on labelled training data, survivorship bias can occur if the training set is unrepresentative because it only contains data that survived some selection criterion. For instance, if a dataset only includes records from successful outcomes, the resulting model will be biased toward features present in those outcomes, potentially ignoring features that are critical to failure cases.
  • Failure Analysis in Software Product Engineering:
    When analysing software reliability or bug frequency, focusing solely on modules or features that have not yet exhibited failures can create a false sense of reliability. If you exclude the modules that experienced crashes or defects, the analysis will be biased, leading to an overestimation of software stability. This survivorship bias can misinform decision-making about quality assurance priorities, as it ignores the data from failed or problematic components that are critical to understanding overall system robustness.
  • Investment Performance Analysis:
    In finance, survivorship bias is often seen in mutual fund performance studies. Studies that look only at funds that are still active ignore those that have closed or merged due to poor performance. As a result, the average performance appears better because only successful funds remain in the analysis pool, creating a bias that overstates returns.
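The mutual fund example is easy to see in miniature. The sketch below uses a tiny, entirely hypothetical fund universe: averaging only the funds still open flatters the sector, while including the funds that closed for poor performance gives the honest figure.

```python
# Hypothetical fund universe: annualised returns over a decade.
funds = {
    "Fund A": 0.09, "Fund B": 0.07, "Fund C": 0.11,  # still active
    "Fund D": -0.12, "Fund E": -0.08,                # closed after losses
}
closed = {"Fund D", "Fund E"}

surviving_returns = [r for name, r in funds.items() if name not in closed]
all_returns = list(funds.values())

avg_surviving = sum(surviving_returns) / len(surviving_returns)
avg_all = sum(all_returns) / len(all_returns)

# A study of active funds alone reports ~9%; the full universe earned ~1.4%.
print(f"average return (survivors only): {avg_surviving:.1%}")
print(f"average return (full universe):  {avg_all:.1%}")
```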

Key Takeaways

Survivorship bias occurs when an analysis focuses solely on successful or surviving cases while ignoring those that failed or dropped out. This can lead to misleading conclusions, since the dataset is incomplete. Relying solely on survivors often creates a false sense of reliability, success, or stability, because the critical data points (those that could disprove a theory) are unknowingly excluded. For a product manager, such conclusions cause poor decision-making and missed opportunities for improvement. To avoid this bias, always include both successes and failures in an analysis, look for what is missing from a dataset before using it, and compare successful and unsuccessful outcomes to gain a fuller perspective. Survivorship bias appears across sectors and should therefore be reviewed routinely, especially during critical phases when key business and strategy decisions are being made.

Frequently Asked Questions

What is Survivorship Bias?

Survivorship bias is a type of selection bias that occurs when only the surviving or successful cases are analysed, while those that failed are ignored. This leads to a misrepresentation of reality and potentially missed opportunities.

Why is Survivorship Bias a Problem?

Survivorship bias is problematic because it distorts our perception of success or performance by excluding failures or less successful cases. This can result in incorrect conclusions, overconfidence, and poor decision-making.

How Does Survivorship Bias Impact Product Management?

In product management, survivorship bias can lead to an incomplete understanding of customer needs, feature success, or product-market fit. If only satisfied users are considered, the pain points of churned users are ignored, which can lead to stagnation or product issues that go unresolved.

How Can I Identify Survivorship Bias in My Analysis?

To spot survivorship bias, ask yourself what data might be missing, consider whether you are focusing solely on successful outcomes, and compare the success stories with failures to understand the differences.

What are Some Strategies to Mitigate Survivorship Bias?

Include All Data: Ensure both successful and unsuccessful cases are included in your analysis.

Analyse the Drop-Off Points: Examine where and why users or entities dropped off, instead of just focusing on those that completed the journey.

Challenge Your Assumptions: Always question whether success can genuinely be attributed to certain traits, or whether you are simply seeing survivorship bias.
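The “analyse the drop-off points” strategy is often implemented as a funnel report. Here is a minimal sketch with hypothetical step counts: rather than celebrating only the users who completed the journey, it computes step-to-step conversion so you can see exactly where people fall out.

```python
# Hypothetical onboarding funnel: user counts at each step.
funnel = [
    ("visited landing page", 10000),
    ("created account",       3000),
    ("completed setup",        900),
    ("invited a teammate",     300),
]

# Step-to-step conversion exposes *where* users drop off, instead of
# only looking at the 300 users who completed the whole journey.
conversions = {}
for (step, n), (next_step, n_next) in zip(funnel, funnel[1:]):
    conversions[next_step] = n_next / n
    print(f"{step} -> {next_step}: {n_next / n:.0%}")
```

Each step here loses roughly 70% of users; without this view, a survivor-only analysis would study the final 300 users and never learn which step is bleeding the other 9,700.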

Can Survivorship Bias Affect A/B Testing Results?

Yes. If an A/B test only considers users who converted and ignores those who did not, the results will be skewed. It’s crucial to evaluate the performance across both converters and non-converters to understand the true impact of the changes.
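As a hypothetical illustration of that point, the sketch below compares two variants using the full count of exposed users. Counting converters alone would crown variant B (150 conversions versus 120); dividing by everyone exposed, converters and non-converters alike, shows variant A actually converts at a higher rate.

```python
# Hypothetical A/B results: count ALL exposed users, not just converters.
variants = {
    "A": {"converted": 120, "not_converted": 880},   # 1,000 users exposed
    "B": {"converted": 150, "not_converted": 1350},  # 1,500 users exposed
}

rates = {}
for name, c in variants.items():
    exposed = c["converted"] + c["not_converted"]
    rates[name] = c["converted"] / exposed
    print(f"Variant {name}: {c['converted']} conversions out of "
          f"{exposed} exposed ({rates[name]:.1%})")

# B has more raw conversions, but A has the higher conversion rate:
# ignoring non-converters would pick the wrong winner.
```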