Before 2020, very few of us spent much time worrying about pandemics. Now, we have all experienced — or are still experiencing — what it's like to have a pandemic affect our daily lives. As of November 2021, COVID-19 has killed over 5 million people and destroyed tens of trillions of dollars of economic value.
Despite how terrible COVID-19 has been for human health and the world economy, it's possible that a future pandemic could be even more devastating. This is why we think preparing for pandemics is among the best ways we can improve the long-term future.
Biosecurity, broadly defined, is a set of methods designed to protect populations against harmful biological or biochemical substances. This could refer to a wide range of biological risks, but this page is specifically focused on biosecurity that reduces global catastrophic biological risks (GCBRs). A GCBR is a biological event of unprecedented scale that poses a threat to humanity's survival or its long-term potential, such as a pandemic that kills a sizable fraction of the world's population.
Ensuring we are prepared for future pandemics could be a matter of life and death for humanity, making biosecurity a cause with an extremely large scale. Worryingly, much of this cause is neglected and there is significantly more we should be doing — especially as there are potentially tractable solutions we could pursue to make us safer. We therefore think biosecurity is a high-priority cause in which your support could make a significant difference.
There are a number of different types of GCBRs. At the broadest level, we distinguish between two kinds of pandemic: those caused by naturally occurring pathogens and those caused by human-engineered pathogens. Engineered pathogens can be further distinguished by whether they are released accidentally or intentionally.
The likelihood and potential for harm of these different kinds of pathogens vary, so we analyse the pandemics they may each cause below.
The deadliest event in recorded history was likely a natural pandemic: the bubonic plague ravaged Europe and parts of Asia and Africa between 1346 and 1353, killing an estimated 38 million people — roughly 10% of humans alive at the time.1
Natural pandemics pose a small but significant risk of killing a sizable fraction of the world's population, but a far smaller risk — roughly 100 times smaller — of killing everyone alive.
In an informal survey of participants at a 2008 conference about global catastrophic risks, the median respondent estimated that by 2100, a natural pandemic had a 60% chance of killing at least 1 million people, a 5% chance of killing at least 1 billion people, and a 0.05% chance of causing human extinction.2
This assessment is broadly in line with what independent lines of evidence suggest: that extinction from a natural pandemic is possible, but very unlikely for a couple of reasons.
But the above argument doesn't take into account that some aspects of modern living conditions are quite different from those prevailing over most of human history. Some of these differences put us at greater risk, while others decrease it.
Modern living conditions that increase risk include:
Modern living conditions that decrease risk include:
Taking all this into consideration, it's unclear whether our cumulative risk has increased or decreased. However, even if the risk has in fact increased, it likely hasn't done so to a degree that would put the above assessment in doubt. Therefore, while we still face considerable uncertainties surrounding the estimation of GCBRs from natural pathogens, our conclusion that risk is relatively low is reasonably robust.
Several researchers within the effective altruism community believe engineered pathogens pose a more serious biosecurity risk than natural pathogens.6 A sufficiently capable group or government could alter a pathogen's disease-causing properties — such as transmissibility, lethality, incubation time, and environmental survival — to increase its potential for damage. By contrast, a naturally evolving organism is constrained by selection pressures to strike a balance between its own fitness and that of its host.
These concerns are exacerbated by trends suggesting that opportunities to cause widespread harm with engineered pathogens are becoming increasingly available, such as:
Such harm may result either from an accidental laboratory release (as a consequence of research on potential pandemic pathogens), or from an intentional release by a hostile group or agent.
In a presentation at a 2011 conference in Malta, Dutch virologist Ron Fouchier described how his team had successfully created a novel, contagious strain of H5N1, the virus subtype responsible for bird flu. Although H5N1 kills about half of the people it infects,7 it is fortunately not readily transmissible between humans. Fouchier told his audience, however, that his team had "mutated the hell out of H5N1" and then passed the mutated strain through a series of ferrets (animals often used to model influenza transmission in humans). After 10 ferrets, the virus had acquired the ability to spread from one animal to another, just like seasonal flu.
Such experiments are seriously worrying, primarily because of the possibility of an accidental release. Fouchier's group worked in a biosafety level 3 (BSL-3) laboratory, the level required for work involving microbes that can cause serious or lethal disease through inhalation. But the track record of past laboratory escapes indicates a probability of accidental release much higher than would be acceptable on any reasonable cost-benefit analysis — Marc Lipsitch and Thomas Inglesby estimate the chance of an accidental release from a BSL-3 facility at 0.2% per laboratory-year. Such a "low" probability can translate into very high expected costs if it risks a pandemic, even one comparable to COVID-19, whose case fatality rate is at least ten times lower than that of H5N1. Indeed, the authors estimate that each laboratory-year of experimentation on virulent, transmissible influenza virus has an expected death toll of 2,000 to 1.4 million people.
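To see how a seemingly small annual release probability translates into a large expected death toll, here is a minimal back-of-the-envelope sketch. Only the 0.2% per laboratory-year figure comes from Lipsitch and Inglesby; the chance that a release seeds a pandemic and the resulting death toll are illustrative assumptions, not their estimates.

```python
# Back-of-the-envelope expected deaths per laboratory-year.
# Only the 0.2% release probability comes from the text above;
# the other two inputs are illustrative assumptions.

p_release_per_lab_year = 0.002      # Lipsitch & Inglesby: accidental release, BSL-3
p_pandemic_given_release = 0.1      # assumption: chance a release seeds a pandemic
deaths_if_pandemic = 100_000_000    # assumption: deaths from a severe flu pandemic

expected_deaths = (p_release_per_lab_year
                   * p_pandemic_given_release
                   * deaths_if_pandemic)
print(f"Expected deaths per laboratory-year: {expected_deaths:,.0f}")
# -> 20,000, which falls within the 2,000 to 1.4 million range cited above
```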
Dangerous pathogens can be and have been studied and modified in the context of military research. The Soviet Union's Biopreparat programme was the largest, most effective, and most sophisticated offensive biowarfare programme in history. It employed tens of thousands of scientists and technicians in a network of clandestine research institutes and production facilities, and stockpiled and weaponised over a dozen viral and bacterial agents — including smallpox and the causative agents of anthrax, plague, and tularaemia. The programme was associated with a significant number of accidental pathogen escapes, including a release of aerosolised anthrax that killed over 60 people in a nearby town.8
If a bioweapons programme of comparable scale existed in the future, Lipsitch and Inglesby's estimates would predict an accidental release of weaponised agents within decades. The Biopreparat escapes failed to result in a global pandemic because the Soviet programme focused almost exclusively on non-pandemic agents. This focus was driven by the limitations of 1980s technology rather than self-restraint. With contemporary biotechnology, pursuit of bioweapons by a resourceful state actor should be regarded as one of the most concerning sources of GCBR.
Various radicalised groups and terrorist organisations have expressed an intent to use bioweapons or dangerous pathogens for destructive purposes. Some have acted on this intention, such as the doomsday cult Aum Shinrikyo, which attempted attacks with anthrax and botulinum toxin before killing 13 people and injuring thousands in its 1995 sarin gas attack on the Tokyo subway.9
While the risk of accidental release can be estimated from frequency data on past laboratory escapes, estimating the risk of intentional release is much more speculative. As Lipsitch and Inglesby note:10
Such a calculation would require reliable, quantitative data on a variety of probability assessments: the probability that a person, group, or country intends to release potential pandemic pathogens (PPP); that a person, group, or country has the means to obtain the pathogen, or has the capacity to generate one from published data; and, that a person, group, or country has the means of distributing a PPP in a way that would start an epidemic. Those kinds of data are not presently available, nor will they be in the foreseeable future.
The survey mentioned earlier found that the median respondent believed that, by 2100, an engineered pandemic had a 30% chance of killing at least 1 million people, a 10% chance of killing at least 1 billion people, and a 2% chance of causing human extinction.
A sense of the risk posed by the intentional release of engineered pathogens can also be found on Metaculus, a website that aggregates probability estimates from its users on a wide range of questions (and which so far has a reasonably good track record). As of November 2021, Metaculus makes the following forecasts:11
Considered together, these forecasts imply an estimated unconditional probability of a near-extinction biological event by 2100 of around 0.01%.
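To illustrate how conditional forecasts of this kind can be chained into an unconditional estimate, here is a minimal sketch. The individual probabilities below are placeholders chosen only to show the structure of the calculation; they are not the actual Metaculus figures.

```python
# Chaining conditional forecasts into an unconditional probability.
# All three inputs are hypothetical placeholders, not Metaculus values.

p_catastrophe_by_2100 = 0.25           # assumption: any global biological catastrophe
p_engineered_given_catastrophe = 0.4   # assumption: it involves an engineered pathogen
p_near_extinction_given_event = 0.001  # assumption: it comes close to causing extinction

p_unconditional = (p_catastrophe_by_2100
                   * p_engineered_given_catastrophe
                   * p_near_extinction_given_event)
print(f"{p_unconditional:.2%}")  # -> 0.01%
```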
We can also estimate the likelihood of intentional deployment of engineered pathogens using statistics about fatalities from terrorism and warfare. In a seminal 1935 paper, Lewis Fry Richardson observed that deadly conflict at all scales — from individual homicides to world wars — can be summarised by a simple model, with timing described by a Poisson process and severity described by a power law.12 Because fatalities fit this regular pattern, Richardson's model can be used to estimate the probability of events of unprecedented severity.
One study has done just that, and estimates a risk of human extinction from biological terrorism of 0.014% per century, and a corresponding risk from biological warfare of 0.005% per century.13 However, this method probably underestimates risk for a number of reasons, such as the conservative assumptions made by the authors and the increasing risk expected from the trends noted earlier.14
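As a rough illustration of this kind of extrapolation, the sketch below combines a power-law severity distribution with a Poisson arrival rate, in the spirit of Richardson's model. The exponent, attack rate, thresholds, and extinction probability are illustrative assumptions rather than the study's own inputs, though they happen to produce a figure of the same order as the 0.014% estimate above.

```python
import math

# Tail extrapolation in the spirit of Richardson's model: event severities
# follow a power law, and events arrive as a Poisson process.
# All parameter values are illustrative assumptions.

alpha = 0.5                      # assumption: power-law exponent for attack fatalities
attacks_per_year = 1.0           # assumption: rate of attacks causing at least one death
threshold_deaths = 5e9           # a death toll that could plausibly threaten extinction
p_extinction_given_event = 0.1   # assumption: extinction given an event that large

# Probability that a single attack exceeds the threshold under the power law,
# relative to attacks killing at least one person.
p_exceed = threshold_deaths ** (-alpha)

# Expected number of such events in a century, the probability of at least one
# (Poisson), and the implied extinction risk per century.
expected_per_century = attacks_per_year * 100 * p_exceed
p_at_least_one = 1 - math.exp(-expected_per_century)
extinction_risk_per_century = p_at_least_one * p_extinction_given_event

print(f"Extinction risk per century: {extinction_risk_per_century:.3%}")
# -> roughly 0.014% per century under these assumptions
```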
Superficially, biosecurity as typically defined does not appear to be a particularly neglected cause area. Even before the COVID-19 pandemic, the United States federal government allocated around $3 billion annually to biosecurity. As Gregory Lewis notes, this figure stands in remarkable contrast to the roughly $10 million spent on AI safety in 2017.
However, this picture is somewhat misleading for several reasons:
The bottom line is that biosecurity, and especially work to reduce GCBRs, receives far less funding than it would if humanity were adequately prioritising its own safety. The cause is therefore neglected.
The tractability of biosecurity seems comparable to that of our other high-priority causes aimed at safeguarding the long-term future, such as AI safety and nuclear security. 80,000 Hours rates biosecurity as "moderately tractable," and experts characterise the reduction of GCBRs as "potentially tractable."17
We face two important obstacles to significant progress in biosecurity.
The first is that biotechnology is often dual-use — that is, it can be used to do good as well as to cause harm. Research or development with the potential to cure disease or extend life can frequently also be put to the service of malicious or destructive purposes. This is less true of some other types of risk: nuclear weapons programmes, for instance, require facilities for enriching uranium to levels that have no other legitimate use.18
The second is that public discussion of research in biotechnology can often constitute an information hazard — that is, a risk arising from the spread of true information.19 This is for a couple of reasons:
There are several promising interventions to reduce GCBRs. Open Philanthropy wrote a comprehensive report on GCBRs, which largely focused on viral pathogens. (Although viruses are not the only pathogens capable of posing a GCBR, they are especially worrying because of their transmissibility and virulence, and because we have limited treatments against them.) The report identifies several promising goals which, if met, would make us safer:
Of these, metagenomic sequencing is an especially promising long-term approach to GCBR management. Carl Shulman has argued that, given the falling costs of DNA sequencing noted earlier, continuous and ubiquitous surveillance for new pathogens, sufficient to virtually eliminate GCBRs, should become affordable to most governments within a few decades.
Other broader interventions to reduce GCBRs have been proposed, including:
It is unclear whether supporting biosecurity is currently the most promising option for safeguarding humanity's future. You may be one of many who think that AI poses a higher risk of existential catastrophe.21 As noted earlier, AI safety receives far less funding than biosecurity overall, though it's unclear whether there's much difference between the two if we compare only funding explicitly aimed at reducing existential risks.
The case for prioritising biosecurity involves a great deal of speculation and subjective judgement on a number of key questions, including:
If you prefer a higher level of certainty that your support is having a positive impact, you may want to support other causes with better-understood solutions.
Because biosecurity is fertile ground for information hazards, discussion in this field will inevitably be less transparent than in other areas, and charity evaluators may exhibit less reasoning transparency than some donors would prefer.
You can donate to several promising programmes working in this area via our donation platform. For our charity and fund recommendations, see our best charities page.
To learn more about biosecurity, we recommend the following resources.
This page was written by Pablo Stafforini. You can read our research notes to learn more about the work that went into this page.
Please help us improve our work — let us know what you thought of this page and suggest improvements using our content feedback form.