The assumption of a single AI stems from the assumption that an AI will have zero risk tolerance. From that it follows that the most powerful AI will destroy or limit all other sentient beings within its reach.
There's no reason an AI couldn't be programmed to have some tolerance for risk. Pursuing many of the nobler human values may require it.
I make no claim that Eliezer and/or the SIAI have anything like this in mind. It seems that they would like to build an absolutist AI. I find that very troubling.
If I thought they had settled on this, and that they were likely to succeed, I would probably feel it was very important to work to destroy them. I'm currently not sure about the first, and I think the second is highly unlikely, so it is not a pressing concern.