Asking the right "what would happen if..." questions at the right times can help us make better decisions and maximise our impact.
In the 1998 film Sliding Doors, the writers explore two possible realities: in one, the protagonist runs down the stairs to the train platform just in time to squeeze in through the sliding doors before they close; in the other, she narrowly misses her train, the sliding doors slamming shut in front of her.
This seemingly small change sets off two completely different storylines, both of which the audience gets to witness. Since it's a '90s film, the focus of the movie isn’t on the alternate realities so much as the romantic outcome: while it first seems like the protagonist will only find her soulmate in the case where she makes her train, it turns out that true love conquers time, space, and counterfactuals — which I'll loosely define for now as the different potential outcomes of making a different decision or taking a different action. (In this case, the counterfactual action was whether she caught the train.)
Of course, if life were a '90s film, perhaps we wouldn't have to worry so much about counterfactuals. We'd somehow end up in the right reality no matter our actions, dancing to Ace of Base and living it up poolside with the knowledge that a happily-ever-after was fated for all, one way or another.
In real life, though, counterfactual thinking is a useful tool that can help us make better decisions when contemplating aspects of life that are more within our control than whether we make or miss a train. For example, if you've ever asked yourself "what would have happened if..." questions like the ones below, you were likely exploring the counterfactuals of your choices.
How would my life be different if...
Despite how common it is to imagine "what if" scenarios like these, most of us likely don’t use counterfactual thinking as much — or as strategically — as we could. And to be clear, this isn’t limited to exploring your own actions, nor does it have to be done retroactively. By better understanding what counterfactual thinking is and why it’s useful, we can add this important tool to our arsenal of methods for doing the most good we can.
Let’s begin by defining the term a bit more carefully.
We can understand the concept by splitting the word into two parts: counter (against) and fact (the reality of what has occurred/will occur). For example, let’s say I decide to go swimming. A counterfactual of that choice is not to go swimming.
I can also break that down further. In the counterfactual world where I decide not to go swimming, I might stay indoors all afternoon (one possible counterfactual to going swimming) or I might go hiking (a second possible counterfactual to going swimming).
Let’s also mention a situation that doesn’t have to do with my choices. A counterfactual to Candidate A winning an election, for example, might be that Candidate B wins, that Candidate C wins, or even that a political revolution occurs and the election becomes irrelevant.
In short, a counterfactual of a situation can be thought of as a possible alternative to what happened/will happen that runs counter to what actually occurred/will occur.
Counterfactual thinking
Effective altruism asks the question, “how can we do the most good?” and answering this requires some counterfactual thinking. That’s because, to attempt to estimate how to do the most good, we have to determine whether the outcomes of alternative actions might be better than those we’re currently considering.
Imagine that you know about an organisation that is focused on doing good — for example, your local animal shelter or soup kitchen. You may feel inclined to support that organisation, and that’s a natural reaction. After all, when we know a particular action (in this case, donating money to an organisation you care about) is likely doing some good, continuing to do it probably seems like common sense.
But to have a better shot at maximising our impact, we really shouldn’t stop there. Counterfactual thinking reminds us not only to ask “is this action doing good?” but also to ask “does this action do more good than something else I could be doing?” For some examples of how this can play out, check out our comparing charities page!
We can see this way of thinking in several core aspects of effective altruism, including:
When possible, interventions can be tested by examining the differences between groups that don’t receive any intervention (“control groups”) and those that do. This, in essence, explores the counterfactual of what would happen without the intervention, helping determine a) if progress occurs and b) whether there is a causal link between any observed progress and the intervention being tested. In other words, if we implement a program and notice some improvements, how confident can we be that those improvements happened because of the program?
Comparing related interventions helps explore whether the outcome of an alternative intervention (a counterfactual to the one you might be considering) is more impactful. To use the example above, we might ask: is donating to my local animal shelter or to an organisation like The Humane League (which focuses on farm animals) better at reducing animal suffering? According to one estimate, for every $1000 donated to Humane League corporate campaigns, approximately 100,000 farm animal lives are improved. In contrast, only around two cats or dogs can be rescued with a $1000 donation to a typical animal shelter rescue campaign.
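The comparison above can be made concrete with some back-of-the-envelope arithmetic. The sketch below uses the rough estimates quoted in the text (100,000 farm animal lives improved vs. around two shelter animals rescued per $1,000) and treats "animals helped per dollar" as the sole metric, which is of course a simplification — the two outcomes aren't strictly equivalent.

```python
# Back-of-the-envelope cost-effectiveness comparison using the
# illustrative figures quoted above (estimates, not precise data).
donation = 1000  # dollars

# Estimated animals helped per $1000 donated
humane_league_animals = 100_000  # farm animal lives improved (corporate campaigns)
shelter_animals = 2              # cats/dogs rescued (typical shelter campaign)

def animals_per_dollar(animals_helped: float, cost: float) -> float:
    """Animals helped per dollar donated."""
    return animals_helped / cost

hl_rate = animals_per_dollar(humane_league_animals, donation)   # 100.0
shelter_rate = animals_per_dollar(shelter_animals, donation)    # 0.002

print(f"Humane League: {hl_rate} animals per dollar")
print(f"Local shelter: {shelter_rate} animals per dollar")
print(f"Ratio: {hl_rate / shelter_rate:,.0f}x")  # 50,000x
```

Even if the underlying estimates are off by an order of magnitude in either direction, a gap this large suggests the counterfactual comparison is worth making.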
The concepts of neglectedness and replaceability help us maximise our personal impact by focusing on causes and careers that truly need us. Determining what those are necessitates exploring a potential counterfactual of not taking a particular action — that someone else might do it anyway!
Let’s look at each of these in more detail.
In his book Doing Good Better, Will MacAskill (a co-founder of effective altruism and Giving What We Can) tells the story of PlayPumps International, a charity that pioneered water pumps that also acted as merry-go-rounds. The idea was — on the surface — genius. The spinning of the pump (caused by children playing on the merry-go-round) would provide clean water for communities in lower-income countries, eliminating the hard work of pumping water manually while simultaneously providing entertainment for the children!
It wasn’t until 1,800 PlayPumps had been installed, the charity had won a World Bank Development Marketplace Award, and several celebrities had jumped behind the cause, that independent reviews revealed some major issues. The pumps required constant force to work, which meant children didn’t really enjoy playing on them, so in many cases, the village women ended up having to manually push them instead. Worse, PlayPumps were harder to push and provided only one-fifth as much water per hour as the old, manual hand pumps!
One of the lessons here is that good intentions and intuitions are often not enough to determine whether an intervention will be effective. That’s why testing interventions against a counterfactual is so important to effective altruism. Instead of assuming a predicted outcome will occur, an organisation can use randomised controlled trials to test their intuitions using a control group, which represents the counterfactual of what would have happened without the intervention (more on this below). If PlayPumps had been tested, it’s likely the well-intentioned founder would have poured his energy and resources into something that really made things better, instead of (arguably) worse.
Asking the question, “what would happen without the intervention?” not only helps eliminate programs that may do more harm than good; it also helps explore a very tricky, and sometimes overlooked part of evaluating programs: causality.
If we observe progress after a particular event or action, it can be tempting to jump to the conclusion that this event/action caused the progress to happen. However, it’s possible that this progress would still have happened without the suspected trigger. Even when progress seems to be related to a particular action — for example, a friend changing their mind on an issue after hearing your arguments — the connection between your arguments and their position change could be as spurious as the apparent correlation between swimming pool drownings and Nicolas Cage films! Perhaps it was a different experience altogether that swayed your friend from their position, and your conversation had very little effect.
In the case of evaluating programs that aim to do good, counterfactual thinking reminds us to be wary of putting too much weight on the progress we might see after an intervention is implemented, since observation alone is likely not enough to determine if the progress we’re seeing would have happened anyway. Randomised controlled trials (RCTs), however, provide a much stronger basis for determining cause. The charity evaluator GiveWell explains that since RCTs allow direct comparison between a treatment group (in which the intervention is implemented) and a control group (in which no intervention is implemented), “it is generally presumed that any sufficiently large differences that emerge between the treatment and control groups were caused by the program.”
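The core RCT logic can be illustrated with a toy simulation. The numbers here are purely made up for illustration: we give a hypothetical "treatment" group a fixed boost over a "control" group, add random variation, and check that the difference in group means recovers the true effect — the control group standing in for the counterfactual of no intervention.

```python
import random
import statistics

random.seed(42)  # reproducible toy example

TRUE_EFFECT = 2.0  # the boost the (hypothetical) intervention provides
BASELINE = 10.0    # average outcome with no intervention
N = 5000           # participants per group

# Control group: no intervention, just baseline plus random variation.
control = [BASELINE + random.gauss(0, 1) for _ in range(N)]

# Treatment group: baseline plus the intervention effect plus variation.
treatment = [BASELINE + TRUE_EFFECT + random.gauss(0, 1) for _ in range(N)]

# The control group represents the counterfactual, so the difference in
# means estimates what the intervention itself caused.
estimated_effect = statistics.mean(treatment) - statistics.mean(control)
print(f"Estimated effect: {estimated_effect:.2f} (true effect: {TRUE_EFFECT})")
```

With large enough groups, the estimate lands close to the true effect; note that simply observing the treatment group's improvement alone (from 10 to 12) would tell us nothing about causality without the control group to compare against.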
While many donors are content to fund any cause that’s doing good, those interested in effective giving believe we can maximise our impact by comparing charities and programs to find and fund those that do the most good. Since there are so many problems in the world, and so much suffering, it stands to reason that — similar to triaging at an emergency room — we’d want to put our energies where they’ll have the biggest impact. Donors who prefer this approach are, in essence, considering the counterfactuals of their choices. For example, they might think something like: it’s true that by donating to most organisations, I will likely be doing some good. But what would happen if I made a different choice? Are there alternative choices that would allow me to do more good? (Our comparing charities page explores this topic in more depth.)
There’s another useful counterfactual to consider when making decisions: namely, that someone else may take the action you’re considering even if you don’t take it yourself. In other words, what are the second-order effects (the consequences of the consequences) of your decision?
We can better understand this by discussing one of the key criteria effective altruists use for determining which cause areas to work on: neglectedness.
It’s tempting to get involved in the causes we hear about the most — the latest disaster dominating the media or the societal issues that most frequently come up in conversation. This can result, though, in some causes having a “funding overhang,” meaning that more funding is available than can be efficiently used: the infrastructure for distributing resources lags behind the accumulated resources. In other words, an organisation is not equipped to translate the influx of funding into tangible progress and cost-effective programs right away, increasing the chances that your money waits in a backlog instead of making an immediate difference. In this case, continuing to flood the cause with funds is likely not as useful (though it may still do good in certain cases), while sending physical goods can, in some cases, even do harm.1

Considering whether a cause is neglected is, in essence, exploring the counterfactual: what would others do if I didn’t work on or fund this particular cause? This can help us increase our impact by focusing on causes that aren’t getting much attention and, as such, may be more in need of resources.
Counterfactual thinking can also be used in the context of career decisions — to determine what would happen if you didn’t take a particular role. Would someone else better suited to the role take it on instead, thereby creating more impact than you could have? The organisation 80,000 Hours, which explores how to choose an impactful career, warns that this concept, termed “replaceability,” can be taken too far; however, it can help highlight the importance of fit when deciding where to place your energies.
Whether you’re new to counterfactual thinking or already consider yourself a pro, I hope this article helped illuminate a few of the ways exploring counterfactuals can help us maximise our impact. While there will always be uncertainty when exploring hypotheticals, using stronger reasoning and putting more weight behind established tools like RCTs is certainly better than crossing our fingers and hoping for a '90s movie happily-ever-after! By asking the right “what would happen if…” questions at the right times, we can be more confident that we’re putting our funds and energies into causes, careers, and charities where we’re likely doing more good than we would with something else we might do instead.
Footnotes
1. A 2006 Disease Control Priorities report had this to say about international disaster relief efforts: “The international community, eager to demonstrate its solidarity or to exercise its ‘right of humanitarian intervention,’ undertakes its own relief effort on the basis of the belief that local health services are unwilling or unable to respond. Donations of useless medical supplies and medicines and the belated arrival of medical or fact-finding teams add to the stress of local staff members who may be personally affected by the disaster. The cultural disregard of the humanitarian community to cost-effective approaches in times of disaster and the tendency to base decisions on perceptions and myths rather than on facts and lessons learned in past disasters contribute to making disaster relief one of the least cost-effective health activities.” Claude de Ville de Goyet, Ricardo Zapata Marti, and Claudio Osorio, “Natural Disaster Mitigation and Relief,” Chapter 61 in Disease Control Priorities in Developing Countries, Second Edition (DCP2), The World Bank, 2006.