Friday 19 February 2016

Major Types of Global Risks

Summary: In terms of which global risks we should currently be directing resources toward researching and mitigating, I've reached the same conclusion as the Future of Life Institute and the perspective laid out in 'Global Catastrophic Risks' (2008). That is, risks from emerging or existing technologies which could critically threaten the stability of human civilization are the ones to prioritize. In this I include most risks which involve an anthropogenic factor or component, such as climate change. I describe what I currently perceive as the most plausible mechanism for catastrophic harm for each of these types of risk. I also give treatment to other risks, and conclude that systemic (e.g., macroeconomic, socio-political) instability and nanotechnology are both types of risk which don't themselves currently pose a global catastrophic risk, but, for different reasons, each ought to remain on the radar of an integrated assessment of global catastrophic risks.

Since I started dedicating serious time and thought to global catastrophic risks (GCRs), I've assumed there are four types of risk humanity faces and should dedicate the bulk of its risk-mitigation efforts to. However, I realized that just because this is my perspective doesn't mean it's everybody's perspective. So, I should check my assumptions. They are below. This is also an invitation to voice disagreement, or to suggest other risks I should take more seriously.

Nuclear Weapons

I'm assuming the inclusion of nuclear war is so obvious it doesn't warrant further explanation. For the record, aside from nuclear war, I also include scenarios in which one or more nuclear weapons are detonated outside the context of a war. Here is an excerpt from Global Catastrophic Risks (2008) detailing nuclear risks which don't begin as war[1].
  • Dispersal of radioactive material by conventional explosives ('dirty bomb')
  • Sabotage of nuclear facilities
  • Acquisition of fissile material leading to the fabrication and detonation of a crude nuclear bomb ('improvised nuclear device')
  • Acquisition and detonation of an intact nuclear weapon
  • The use of some means to trick a nuclear state into launching a nuclear strike.

(Anthropogenic) Environmental and Climate Change

The tail risks of climate change could pose a global catastrophe. However, there seem to be other potential GCRs arising from environmental change caused by human activity which aren't also the result of increased atmospheric concentrations of CO2 and other greenhouse gases. Such risks possibly include peak phosphorus, soil erosion, widespread crop failure, scarcity of drinkable water, pollinator decline, and other threats to global food security not related to climate change. There are also potential ecological crises, such as a critical lack of biodiversity. Whether biodiversity or wildlife are intrinsically valuable, and whether humanity ought to care about the welfare and/or continued existence of species other than itself, are normative questions which are orthogonal to my current goals in thinking about GCRs. However, it's possible the mass extinction of other species will harm ecosystems in a way which proves catastrophic to humanity regardless of how much we care about things other than our own well-being. So, it's worth paying some attention to such environmental risks regardless.

When we talk about climate change, typically we're thinking about anthropogenic climate change, i.e., climate change influenced or induced by human action. However, there are a variety of other GCRs, such as nuclear fallout, asteroid strikes, supervolcanoes, and extreme radiation exposure, which would result in a sort of "naturally" extreme climate change. Additionally, these GCRs, alongside systemic risks and social upheaval, could disturb agriculture. Therefore, it seems prudent to ensure the world has a variety of contingency plans for long-term food and agricultural security, even if we don't rate anthropogenic climate change as a very pressing GCR.


Biosecurity Risks 

When I write "biosecurity", I mostly have in mind either natural or engineered epidemics and pandemics. If you didn't know, a pandemic is an epidemic of worldwide proportions. Anyway, while humanity has endured many epidemics in the past, with how globally interconnected civilization is in the modern era, there is more risk than ever before of epidemics spreading worldwide. Other changes in the twenty-first century also seem to greatly increase the risk of major epidemics, such as the rise of antibiotic resistance among infectious pathogens. However, there is a more dire threat: engineered pandemics. As biotechnology becomes both increasingly powerful and more widely available over time, there will be more opportunity to edit pathogens so they spread more readily, cause higher mortality rates, or are less susceptible to medical intervention. This could be the result of germ warfare or bioterrorism. Note the distinct possibility that what an offending party intends as only a limited epidemic may unintentionally metastasize into a global pandemic. Scientists may also produce a potentially catastrophic pathogen which is then either released by accident, or stolen and released into the environment by terrorists.

Other potential biosecurity risks include the use of biotechnology or genetic modification that threatens global food security, or is somehow able to precipitate an ecological crisis. As far as I know, less thought has been put into these biosecurity risks, but the consensus assessment also seems to be they're less threatening than the risk of a natural or engineered pandemic.

In recent months, the world has become aware of the potential of 'gene drives'. At this point, I won't comment on gene drives at length. Suffice to say I consider them a game-changer for all considerations of biosecurity risk assessment and mitigation, and I intend to write at least one full post with my thoughts on them in the near future.

Artificial Intelligence Risks

2015 was the year awareness of safety and security risks from Artificial Intelligence (AI) went "mainstream". The basic idea is that smarter-than-human AI, also referred to as Artificial General Intelligence (AGI) or machine/artificial superintelligence (MSI, ASI, or just "superintelligence"), could be so unpredictable and powerful once it's created that humanity wouldn't be able to stop it. If a machine or computer program could not only outsmart humanity but think several orders of magnitude faster than humanity, it could quickly come to control civilizational resources in ways that put humanity at risk. The difference in intelligence between you and the sort of AGI people fear isn't like the difference between you and Albert Einstein. When concern over AI safety is raised, it's usually in the vein of a machine smarter than humanity to the degree you're smarter than an ant. The fear, then, is that AGI might be so alien and unlike humanity in its thinking that by default it might treat extinguishing humanity not as any sort of moral problem, but as a nuisance at the same level of concern you give to an ant you don't even notice stepping on when you walk down the street. Technologies an AGI might use to extinguish humanity include various types of robotics, or gaining control of the other dangerous technologies already mentioned, such as nuclear weapons or biotechnology.

While there are plenty of opinions on when AGI will arrive, and what threats to humanity, if any, it will pose, concern for certain sorts of AI risks is warranted even if you don't believe risks from machines generally smarter than humans are something to worry about in the present. "Narrow AI" is AI which excels in one specific domain, but not in all domains. Thus, while narrow AI doesn't pose danger on its own, or do anything close to what humans would call thinking for itself, computer programs using various types of AI are tools which could either be weaponized, or which could accidentally cause a catastrophe, much like nuclear technology today. Artificial General Intelligence isn't necessary for the development of autonomous weapons, such as drones which rain missiles from the sky to selectively kill millions, or to justify the fear of potential AI arms races. Indeed, an AI arms race, much like the nuclear arms race during the Cold War, might be the very thing which ends up pushing AI to the point of general intelligence, which humanity might then lose control of. Thus, preventing an AI arms race could be doubly important. Other near-term (i.e., in the next couple of decades) developments in AI might pose risks even if they're not intended to be weaponized. For example, whether it's through the rise of robotics putting the majority of human employment at risk, or through losing control of the computers behind algorithmic stock trading, human action may no longer be the primary factor determining the course of the global economy humanity has created.

These are the types of risks I consider the most imminent threats to humanity, and which should receive the vast bulk of attention and resources dedicated to mitigating GCRs in general. This appears to be the assessment shared by the Future of Life Institute. If it's not clear, each entry in this list isn't so much a single risk as a family of risks I've grouped together on the assumption they have similar causes, and because similar strategies could be pursued to mitigate each of the risks in the same family. Below are some other GCRs which are on my radar, but which I believe are too insignificant to warrant as much concern at this point in time, or which I haven't learned enough about yet and currently don't think of as a major concern.

  • Asteroid Impact. Another GCR which, like AI and nuclear war, is commonly recognized in pop culture is the potential of an asteroid impact on Earth. The first thing to do about potential asteroid risks is to know when asteroids might be coming. Scientists around the world already have a network of telescopes and surveillance technology to forewarn humanity against asteroid strikes. Scientists are also confident they're able to predict potential asteroids passing Earth years in advance, and from what I've read, don't perceive any potentially catastrophic asteroid impacts in the near future. If for whatever reason one we currently don't know about were headed towards Earth, it seems we would still learn about this space object multiple years before it hit Earth, and could act accordingly. This is all we can do for now, although I'm aware there exist would-be plans for dealing with incoming asteroids, such as using a missile to alter their trajectory and deflect them away from Earth. Humanity might actually be more prepared to deal with an asteroid impact than we are to deal with most other GCRs. This isn't to say humanity is well-prepared for an asteroid impact so much as that we're woefully under-prepared to deal with terrestrial threats. Either way, asteroid impacts don't appear to be a current priority.
  • Globally Catastrophic Natural Disasters. This is my awkward name for the type of risk arising from natural disasters, such as firestorms or supervolcanoes. Notable here is the Toba supervolcano eruption, if for no other reason than that it was the event which brought humanity closer to extinction than any other in recorded (pre-)history, believed to have reduced the worldwide human population to 1,000 or fewer. This is another risk I admittedly don't know much about. If I'm correct, nobody rates this sort of risk as all too likely, although we could stand to learn more about the prediction of volcanic activity. I believe the biggest threats from this type of risk are to climate stability and food security, topics I covered above in my section on anthropogenic climate change.
  • Other Extraterrestrial Risks. This includes the potential of extraterrestrial life to eradicate Earth-originating life, and cosmic phenomena such as catastrophic radiation or black holes destroying the Earth. Such phenomena seem very unlikely, but if they were to occur, they would be so catastrophic they would dwarf the power of the other GCRs mentioned above. Honestly, if a black hole or radiation more powerful than what the Sun can throw at us were to arrive at Earth tomorrow, I think we'd be dead. Somebody could argue that since their potential is so devastating, humanity ought to dedicate resources to mitigating these risks even if their chances of occurring in the medium-term future are small. I would argue that since the ability to mitigate these risks is so far beyond the level of technology currently available to humanity, or even what our current scientific theories can produce, the best way to mitigate these risks is to ensure stable technological development into the long-term future, which is in turn ensured by investing in safe science and mitigating the other GCRs mentioned above.

    I should state I don't include in this grouping risks which originate off of Earth but from within our solar system, such as geomagnetic storms or solar radiation, which strike me as more akin to global natural disasters such as imminent climate change risks or supervolcanoes in how they would affect human civilization, and in how we might deal with them.
  • Social and Political Instability. This includes a variety of sociopolitical phenomena which may not actually have much in common with each other from the perspective of the social sciences themselves, but which I'm grouping together because they're so different from every other type of major risk. Hence, it's not well-defined. Failures of governments to coordinate in real-life Prisoner's Dilemma scenarios seem like the biggest problem (a toy payoff sketch illustrating this structure follows after this list). Such failures have already been the cause of existing anthropogenic GCRs, such as the ever-escalating nuclear arms race of the Cold War, and the countries of the world following their economic incentives to do nothing about climate change rather than coordinating to create and enforce environmental regulations to curb pollution and CO2 output. This is a problem which could contribute to future technological arms races, such as an AI arms race, or to countries each pursuing their economic incentives by accelerating the development of dangerous emerging technologies rather than reaching agreements to curb that development. Considering that in the existing cases of the nuclear arms race and climate change it took half a century after recognizing the coordination failure behind a given GCR before humanity got its act together and actually did something to stably decrease the chance of the risk over time, I'm not confident humanity is competent at solving the "cooperate-don't-defect" problem when it arises. This might be the most important lesson we have to learn from the twentieth century, and we haven't mastered it yet. This is a huge problem.

    There are other potential risks of political and social instability, such as terrorism, rogue governments, warfare, or local or global totalitarian regimes. Risk factors for increasing social instability would include the growing popularity of radical religious or political ideologies.
  • Nanotechnology. Risks from molecular nanotechnology (MNT) seem to me to be the most speculative. Even more than with AI, for which we at least have some existing technologies we can imagine will one day lead to superintelligence, it seems there is little to no scientific consensus on if or when MNT will become possible to engineer. Therefore, I defer to other sources in this case. From Global Catastrophic Risks[2]:

    "They [contributors Chris Phoenix and Mike Treder] distinguish between 'nanoscale technologies', of which many exist today and many more are in development,  and 'molecular manufacturing', which remains a hypothetical future technology (often associated with the person who first envisaged it in detail, K. Eric Drexler). Nanoscale technologies, they argue, appear to pose no new global catastrophic risks, although such technologies could in some cases either augment or help mitigate some of the other risks considered in this volume. Phoenix and Treder consequently devote the bulk of their chapter to considering the capabilities and threats from molecular manufacturing. As with superintelligence, the present [emphasis theirs] risk is virtually zero since the technology in question does not yet exist; yet the future risk could be extremely severe."

    And,

    "Phoneix and Treder review a number of global catastrophic risks that could arise with such an advanced manufacturing technology, including war, social and economic disruption, destructive forms of global governance, radical intelligence enhancement, environmental degradation, and 'ecophagy' (small nanobots replicating uncontrollably in the natural environment, consuming or destroying the Earth's biosphere)."

    Messrs. Phoenix and Treder seem to neglect to mention that nanotechnology could augment or amplify both the positive and negative potentials of near-future biotech. I don't know why this is. As far as I can tell, nanoscale biotechnology could thus add a new, as-yet unrealized GCR in the field of biosecurity, one which may eventually become just as threatening as engineered or natural pandemics.

    Other treatments of nanotechnology and how it relates to risk which I've found informative are "Why Eric Drexler's critics are right" by Topher Hallquist, and especially GiveWell's shallow review of risks from atomically precise manufacturing, produced as part of the Open Philanthropy Project, its collaboration with Good Ventures. From the same:
"Atomically precise manufacturing is a proposed technology for assembling macroscopic objects defined by data files by using very small parts to build the objects with atomic precision using earth-abundant materials. There is little consensus about its feasibility, how to develop it, or when, if ever, it might be developed. This page focuses primarily on potential risks from atomically precise manufacturing. We may separately examine its potential benefits and development pathways in more detail in the future."
    I don't think I can do better on this subject than GiveWell can. However, recent developments in both biotechnology and AI have made me wary of writing off risks from emerging technology as something reserved for decades far in the future. So, I think it's worth paying attention to potential developments in nanotech, even if we don't direct much money or time at the field right now. Additionally, in terms of the impact of GCRs and emerging technology over the course of the twenty-first century, I believe the idea of integrated assessment is underrated, and it seems molecular nanotechnology or atomically precise manufacturing could become the most crucial sort of technology later this century.
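
To make the "cooperate-don't-defect" structure mentioned under Social and Political Instability a little more concrete, here is a minimal Python sketch of a two-player Prisoner's Dilemma between two states deciding whether to curb a dangerous technology. The payoff numbers are made-up assumptions purely for illustration, not drawn from any source cited in this post; the point is only that defection is each side's individually best response even though mutual restraint is better for both.

    # A toy Prisoner's Dilemma: two states choose to curb a dangerous
    # technology ("cooperate") or keep developing it ("defect").
    # Payoff numbers are illustrative assumptions, not from any cited source.
    PAYOFFS = {
        ("cooperate", "cooperate"): (3, 3),  # mutual restraint: best joint outcome
        ("cooperate", "defect"):    (0, 5),  # the defector gains a strategic edge
        ("defect", "cooperate"):    (5, 0),
        ("defect", "defect"):       (1, 1),  # arms race: worse for both than restraint
    }

    def best_response(opponent_choice):
        # Pick the choice maximizing this state's own payoff, holding the
        # opponent's choice fixed. Keys are (my_choice, opponent_choice).
        return max(("cooperate", "defect"),
                   key=lambda my_choice: PAYOFFS[(my_choice, opponent_choice)][0])

    for opponent in ("cooperate", "defect"):
        print("If the other state plays", opponent, "-> best response:", best_response(opponent))
    # Both states therefore defect, yielding (1, 1) even though (3, 3) was available.
    print("Mutual defection:", PAYOFFS[("defect", "defect")])
    print("Mutual cooperation:", PAYOFFS[("cooperate", "cooperate")])

The same incentive structure, with higher stakes and more players, is what I have in mind when I worry about arms races and climate inaction above.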

[1] Nick Bostrom and Milan M. Cirkovic, "Introduction", in Global Catastrophic Risks, ed. Nick Bostrom and Milan M. Cirkovic. (New York: Oxford University Press, 2008), 22.

[2] Bostrom and Cirkovic, "Introduction", 24-25.

5 comments:

  1. Although I'm nervous about work against non-AI catastrophic risks because doing so makes space colonization more likely, from a purely factual standpoint, I think your assessment of relative risks is pretty much accurate. In my opinion, AI risks dominate all others combined in the long run, but in the next few decades, the other risks are probably more significant (except for weak-AI robotic warfare and such).

    Replies
    1. Honestly, I haven't read enough of your work or that of others to know what to think of the possible (dis)value of space colonization. I will continue to write about GCRs, and likely, how to mitigate them. However, I consider you a valuable peer in the study of subjects related to effective altruism, so I'm just as likely to read much more of your work and generate more work because of it. It's too early to tell how my future essays will take into account your work.

    2. Sounds good. :) I think most disagreements about the (dis)value of the far future among EAs come down to values rather than facts. I think the expected value (if not necessarily the median or mode) of the far future is positive relative to, say, Toby Ord's values, which place high weight on happiness and are ok with allowing vast increases in suffering by some in order to create even more pleasure by others.

    3. Yeah, based on our interactions in the past I feel like my values, at least as I currently perceive them, are more in line with yours than they are with others. I don't read enough philosophy to understand what it would mean to identify as a utilitarian, in the same way I don't have a sense of what it would mean for me to identify as a third- or fourth-wave feminist because there isn't enough of a consensus on what that means for me to feel confident on making major updates on the basis of social evidence. I view the categories of people these labels are meant to describe as socially constructed. I've been told this perspective is something like 'non-cognitivism'.

      Anyway, for a while now I've been trying to intuit where and how some of my values aren't yet commensurate with yours. After reading 'On Values Spreading' by Michael Dickens (http://effective-altruism.com/ea/ne/on_values_spreading/), I get the sense my values are close to his, and closer to his than yours. However, this still leaves much room for future cooperation between us. It depends on how much pleasure would be produced by generating more suffering, but those values as expressed by Toby Ord shock me. Like, unless I was faced with the choice to create a utility monster who would experience pleasure several orders of magnitude greater than the suffering which, in some fashion, it was necessary to generate to create the utility monster, I am against creating more suffering just to create an even 'greater' amount of pleasure. If it turns out suffering and happiness are commensurable in that sense, the idea of biting the bullet on creating suffering just to create a slightly greater amount of happiness in some other moral patient terrifies me. So, if I had to call myself some sort of utilitarian right now, I would still call myself a classical utilitarian, but more negative- than positive-leaning.

    4. Thanks for the comments!

      I didn't mean to put words in Toby's mouth, since he very likely would not phrase his views in the way that I did. However, "allowing vast increases in suffering by some in order to create even more pleasure by others" seems to be consistent with Toby's classical utilitarianism and is the implicit tradeoff that's made by efforts to make space colonization more likely (unless someone believes that space colonization will reduce more suffering than it prevents, a view to which I assign ~31% probability).

      Relative to the assessments of the median classical utilitarian, I would put the ratio of (expected pleasure) to (expected suffering) in the future as maybe ~5 to 1, although this estimate can change a lot with new arguments. The more unknown unknowns dominate, and the more one expects space colonization to be carried out by uncontrolled AIs (which plausibly benefit from non-AI x-risk work), the closer the ratio is pushed toward 1 to 1.

      Suppose the far future were equivalent to the current human population of Sweden in terms of its distribution of happiness vs. suffering. Lots of people would have normal, satisfied lives, though some would have lifetime depression, painful and fatal childhood diseases, and a few would be tortured by psychopaths. Would you think it's good or bad to create more Swedens (ignoring impacts on animals, etc.)?
