
Prioritising Existential Risk

By Jack Walker - Philosophy Student @ Churchill College, Cambridge

 

Existential risks are events that could irreparably damage the welfare of humanity on a global scale, whether through total human extinction or through a permanent collapse or stagnation that leaves us forever short of our potential. We can distinguish between risks that are anthropogenic (human-caused) and non-anthropogenic (natural). Examples of the former include nuclear warfare, misaligned AI and ecological collapse driven by climate change, whereas the latter covers supervolcanic eruptions, asteroid impacts and natural pandemics. Despite the massive stakes involved, preventing existential risk has only become a major topic of study in the past twenty years and is still largely ignored when it comes to policy.


Why should we be concerned about reducing existential risk? Consider this thought experiment from philosopher Derek Parfit:


Compare three scenarios:

  1. Peace

  2. Nuclear war kills 99% of the global population

  3. Nuclear war kills 100% of the global population

Obviously, 1 is better than 2, and 2 is better than 3. However, Parfit asks us to consider where the greater difference lies: between 1 and 2, or between 2 and 3. Contrary to what most people believe, Parfit argues that the difference between 2 and 3 is many times greater than the difference between 1 and 2. By destroying mankind entirely, 3 closes off the existence of all possible future people. Given how long humanity might otherwise survive, the future lives foreclosed in 3 could number many millions of times more than the lives ended in 2, even though 2 already kills 99% of everyone currently alive.
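A rough calculation shows the scale involved (these figures are illustrative, not Parfit's own). The Earth is expected to remain habitable for around a billion more years. If the population simply held at something like today's level, say 10 billion people per century, that would come to 10 billion people across 10 million centuries, or roughly 10^17 future lives. That is about ten million times the number of people alive today, which is why the step from 2 to 3 forecloses vastly more than the step from 1 to 2.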


If we buy Parfit’s argument, why have politicians failed to take action to reduce risk? One reason is that our cognitive biases, in particular our insensitivity to scale, make it difficult to approach these kinds of issues objectively. Because we struggle to grasp the difference between one enormous number and another, harms of this magnitude all register simply as “catastrophic”, and our responses fail to scale with the stakes. Toby Ord makes the point nicely when he says, “we tend to treat nuclear war as an utter disaster, so we fail to distinguish nuclear wars between nations with a handful of nuclear weapons (in which millions would die) from a nuclear confrontation with thousands of nuclear weapons (in which a thousand times as many people would die, and our entire future may be destroyed)”.


Another reason is surely a lack of political will to spend current resources improving the prospects of people who do not yet exist. If we can help 10 people now or 1,000 people in 100 years, it will be difficult to justify to the 10 why they should be denied aid. Since future people cannot represent their own interests, it falls to us to do it for them. Granted, some countries already have policies in place that deal with the welfare of future people. Kuwait and Nigeria, for example, both operate large sovereign wealth funds set up to support future generations once revenues from fossil fuels eventually dry up. However, most of these policies concern sustainable development rather than actually reducing existential risk. As such, there remain huge problems that today's democratic systems may be poorly positioned to address.


Whilst the prospects for meaningful action to reduce existential risk still seem bleak, progress is being made. The study of existential risk is a hugely exciting field, combining science, politics, and philosophy in an attempt to make a real difference.


Further reading:

  1. 'The Precipice' by Toby Ord

  2. 'The Alignment Problem' by Brian Christian

  3. 'Existential Risk Prevention as Global Priority' by Nick Bostrom


