
Vulnerable World Hypothesis

Last edit: 6 Dec 2022 22:56 UTC by Jordan Arel

The vulnerable world hypothesis (VWH) is the view that there is some level of technological development at which civilization almost certainly gets destroyed unless extraordinary preventive measures are undertaken. The hypothesis was introduced by Nick Bostrom in 2019.[1]

Historical precedents

Versions of VWH were suggested before Bostrom’s statement of it, though without his precise definition or rigorous analysis. An early expression is arguably found in a 1945 address by Bertrand Russell to the House of Lords concerning the atomic bombings of Hiroshima and Nagasaki and their implications for the future of humanity.[2] (Russell frames his concerns specifically in terms of nuclear warfare, but, as Toby Ord has argued,[3] early discussions of existential risk were typically presented this way because, at the time, nuclear weapons were the only known technology with the potential to cause an existential catastrophe.)

All that must take place if our scientific civilization goes on, if it does not bring itself to destruction; all that is bound to happen. We do not want to look at this thing simply from the point of view of the next few years; we want to look at it from the point of view of the future of mankind. The question is a simple one: Is it possible for a scientific society to continue to exist, or must such a society inevitably bring itself to destruction? It is a simple question but a very vital one. I do not think it is possible to exaggerate the gravity of the possibilities of evil that lie in the utilization of atomic energy. As I go about the streets and see St. Paul’s, the British Museum, the Houses of Parliament and the other monuments of our civilization, in my mind’s eye I see a nightmare vision of those buildings as heaps of rubble with corpses all round them. That is a thing we have got to face, not only in our own country and cities, but throughout the civilized world as a real probability unless the world will agree to find a way of abolishing war. It is not enough to make war rare; great and serious war has got to be abolished, because otherwise these things will happen.

Further reading

Bostrom, Nick (2019) The vulnerable world hypothesis, Global Policy, vol. 10, pp. 455–476.

Bostrom, Nick & Matthew van der Merwe (2021) How vulnerable is the world?, Aeon, February 12.

Christiano, Paul (2016) Handling destructive technology, AI Alignment, November 14.

Hanson, Robin (2018) Vulnerable world hypothesis, Overcoming Bias, November 16.

Huemer, Michael (2020) The case for tyranny, Fake Nous, July 11.

Karpathy, Andrej (2016) Review of The Making of the Atomic Bomb, Goodreads, December 13.

Manheim, David (2020) The fragile world hypothesis: complexity, fragility, and systemic existential risk, Futures, vol. 122, pp. 1–8.

Piper, Kelsey (2018) How technological progress is making it likelier than ever that humans will destroy ourselves, Vox, November 19.

Rozendal, Siebe (2020) The problem of collective ruin, Siebe Rozendal’s Blog, August 22.

Sagan, Carl (1994) Pale Blue Dot: A Vision of the Human Future in Space, New York: Random House.

1. Bostrom, Nick (2019) The vulnerable world hypothesis, Global Policy, vol. 10, pp. 455–476.

2. Russell, Bertrand (1945) The international situation, The Parliamentary Debates (Hansard), vol. 138, pp. 87–93, p. 89.

3. Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing, ch. 2.

Open-source LLMs may prove Bostrom’s vulnerable world hypothesis

Roope Ahvenharju · 15 Apr 2023 19:16 UTC · 1 point · 1 comment · 1 min read · LW link

Open Agency model can solve the AI regulation dilemma

Roman Leventov · 8 Nov 2023 20:00 UTC · 22 points · 1 comment · 2 min read · LW link

[Question] Why not constrain wetlabs instead of AI?

Lone Pine · 21 Mar 2023 18:02 UTC · 15 points · 10 comments · 1 min read · LW link

The Fragility of Life Hypothesis and the Evolution of Cooperation

KristianRonn · 4 Sep 2024 21:04 UTC · 50 points · 6 comments · 11 min read · LW link

AI Safety in a Vulnerable World: Requesting Feedback on Preliminary Thoughts

Jordan Arel · 6 Dec 2022 22:35 UTC · 4 points · 2 comments · 3 min read · LW link

The necessity of “Guardian AI” and two conditions for its achievement

Proica · 26 May 2024 17:39 UTC · −2 points · 0 comments · 15 min read · LW link

The Journal of Dangerous Ideas

rogersbacon · 3 Feb 2024 15:40 UTC · −25 points · 4 comments · 5 min read · LW link (www.secretorum.life)

The Buckling World Hypothesis—Visualising Vulnerable Worlds

Rosco-Hunter · 4 Apr 2024 15:51 UTC · −5 points · 2 comments · 4 min read · LW link