I’m not saying it’s inevitable, but it’s a failure of imagination if you can’t think of any way the future could go horribly wrong like that.
My biggest concern is an AI or civilization that decides to create a real hell to punish people for their sins. Humans have pretty strong urges to punish those who do wrong, and our morality and views on punishment are constantly changing.
E.g. if a slaveholder were alive today, some people might want to see them tortured. In the future, perhaps people will want to punish, hypothetically, meat eaters, or people who weren’t as altruistic as possible, or something we can’t even conceive of.
Yeah, there are plenty of examples of dictators who go to great lengths to inflict tremendous amounts of pain on many people. It’s terrifying to think of someone like that in control of an AGI.
Granted, people like that are probably less likely than the average head of state to find themselves in control of an AGI, since brutal dictators often have unhealthy economies and are therefore unlikely to win an AGI race. But it’s not like they have a monopoly on revenge or psychopathy either.
I think sociopaths are about 4% of the population, so your scenario isn’t really that implausible. I just meant what happens if society’s values as a whole change over time, or if an FAI extracts a “true” utility function that includes all the negative stuff, like the desire for revenge.
Yeah, someone made another reply to my question to that effect. Yudkowsky and MIRI emphasize how, in the space of all possible minds a general machine intelligence might develop, the region containing human-like minds is very small. So, originally, I was thinking the chance a machine mind would torture living humans was conditional on a prior mind (human or otherwise) programming it that way, which was itself conditional on a machine being built that recognizes human feelings as mattering at all. The chances of all that happening seemed vanishingly small to me.
However, I could be overestimating the likelihood that Yudkowsky’s predictions are correct. For example, Robin Hanson believes the outcome could be very different: rather than a superintelligence going ‘foom’, it could be based on human brain emulations (HBEs). Based on related topics, I’ve assumed the Yudkowsky-Hanson AI Foom debate is over my head, so I haven’t read it yet. However, others more knowledgeable than I apparently see merit in Hanson’s position and criticisms, including Luke Muehlhauser when I asked him a couple of years ago. While MIRI may approach safety engineering in a way that doesn’t discriminate much between different kinds of technological singularity, they could still be wrong about it being an intelligence explosion. I don’t claim nobody can tell which type of singularity is more likely; I merely mean I’m agnostic on the subject until I (can) examine it better.
Anyway, a singularity more like the one Hanson predicts makes it seem more likely that AGI will notice human values, and could hurt us. For example, HBEs could be controlled by hostile minds that care about hurting us much more than an AGI born from an intelligence explosion would. I’m not confident the likelihood of such scenarios is high enough that I and others shouldn’t sign up for cryonics. I myself am still undecided about cryonics, and skeptical of aspects of the procedure(s). At first, though, I believed this outcome was absurd: I thought the scenario so ludicrous or contrived that it wasn’t even worth assigning a probability to. That was indeed a failure of my imagination. I don’t know what probability to assign now to outcomes where I or others wake up and suffer immense torture at the hands of a hostile future, but I (no longer) believe it should be utterly neglected in future calculations of the value of ‘getting froze’, or whatever.