No, I don’t see a Keanu Reeves ‘The Day the Earth Stood Still’ moment. If we are in a weak zoo/laboratory, however, I do not expect that we will be permitted to develop an ASI, or if we do develop an ASI, it will not actually be ‘ours’, i.e., it will be taken over by/merged with ‘their’ superintelligence. Then, I would hope for formal Contact in some form, as our development of an ASI may establish Contact, if they value honesty. After Contact, I expect the zoo to become a containment area first. It really depends how badly we have screwed up the planet, in terms of carrying capacity issues, like biodiversity and global warming.
Re tourists, hey, that would account for the sightings of UFOs. But, poachers, no. That would be RUDE—we would eventually find out and be angry, risking interstellar war. So, definitely not.
BTW, if you are Alexey Turchin, I tried to reach you via the Effective Altruism Forum. Would you take a look at this?
Draper, J. (2020, April 15). Optimising Peace through a Universal Global Peace Treaty to Constrain Risk of War from a Militarised Artificial Superintelligence. https://doi.org/10.31235/osf.io/4268q
I based the ASI section on your work.
Thanks, yes, it is me. At some point I created a new username, “avturchin”, so PMs sent to my older “turchin” account may not reach me.
I added a link to your post about the Laboratory Zoo to my new draft on “UAP and global catastrophic risks”. I could share the draft.
I am reading your article now, and here are some comments:
“Burning plasma” is an unclear term. It could mean either nearly unlimited nuclear electric energy (e.g., ITER) or, more ominously but less probably, “cold fusion” thermonuclear bombs, that is, thermonuclear bombs without a fission trigger, which could be mass-produced in secrecy or by small actors.
One of the main arguments for your point of view (that is, that wars are bad for AGI safety) is that even a limited AGI, coupled with the almost unlimited capabilities of a rich nuclear-armed state, gains a decisive strategic advantage over other countries, but it has to execute that advantage via war. The same limited AGI in a basement would be almost useless, as it has no resources to leverage.
Dear Alexey,
Yes, I would be interested in your draft. My email is johndr@kku.ac.th . I am a British academic, currently a Visiting Professor at the University of Nottingham; here is my ORCID: https://orcid.org/0000-0002-3626-533X .
‘Burning plasma’ here is the standard hot-fusion sense: a self-sustaining, alpha-particle-heated reaction, as in the sun or, yes, ITER. I have an article on this forthcoming in the engineering journal IEEE TEM. It discusses the peace-building opportunity that the announcement of a burning plasma would offer humanity. A link to the copyedited version is here: https://drive.google.com/open?id=1uRgSaUyOOZ94DxFL0Bza6Lt5YMOdh8i4 . As you can see from the bios, I am collaborating with serious people on this, including a George Washington University professor of innovation economics and an individual with a US DoD background in relevant disciplines. My main relevant affiliation is with the Center for Global Nonkilling (https://en.wikipedia.org/wiki/Center_for_Global_Nonkilling). I can provide certified credentials.
As you may note from this:
Draper, J., & Bhaneja, B. (2019). Fusion Energy for Peace Building—A Trinity Test-Level Critical Juncture. SocArXiv mrzua, Center for Open Science.
I am very interested in revisiting the failed 1946 Baruch Plan, which might have forestalled the Cold War, via the ‘burning plasma’ opportunity, especially to prevent the risk of US-Russian-Chinese ASI-enabled conflict. My main concern is therefore the ‘AI-state’, as I think you defined it in your 2018/2020 article with Denkenberger. The US, Russia, and China are more than capable of tracking all ‘basement’ or corporate AGI development projects and taking them over, becoming AI-states. So the problem converges on the militarized AI-state, as you point out in your 2018 book chapter.
I view the risk of ASI-enabled warfare to maintain or establish global domination as extremely serious, with a high probability by c. 2050. The ASI I am referring to would be young, and its value system would probably be horribly compromised by political subversion.
From my perspective, it would be extremely rewarding to collaborate with a Russian ASI specialist on this draft article. Would you be interested in co-authoring? Once we had finalized something, I would invite Bill Bhaneja, a former Canadian nuclear disarmament diplomat now on the Board of the Center for Global Nonkilling, to join as a third author.
Alexey,
Thanks for your UAP paper. BTW, in your paper I would definitely mention the Laboratory Hypothesis, which was indeed a variant of Ball’s original Zoo Hypothesis. Also, note that there are different kinds of zoo, varying from a collection of cages to safari parks. Zoo ‘keepers’ likewise occupy functionally differentiated roles, from receptionists to veterinarians, and tourists are differentiated too, from specialists to school parties, coming from different interstellar civilizations, not all of which follow the rules to the same extent. This enormous variation among staff and visitors would explain the diversity in legitimate UAP activity. Note also that, because the numbers of both existing interstellar civilizations and emergent stellar civilizations are small, the number of Contact situations is low, so zoo policy for Earth (and other planets) is an ongoing issue. Zoo protocols change over time, as does tourist behaviour, as we have seen over the last century on Earth. Any Star Trek-style ‘Prime Directive’, even if it exists for Earth on the basis of autonomous development combined with life on Earth answering the meaning of the universe, is therefore weak and somewhat variable.