If a human were artificial, would it be considered FAI or UAI? I’m guessing UAI, because I don’t think anything like the process of CEV has been followed to set humans’ values at birth.
If a human would be UAI if artificial, why are we less worried about billions of humans than we are about one UAI? What is it about being artificial that makes unfriendliness so scary? What is it about being natural that makes us so blind to the possible dangers of unfriendliness?
Is it that we don’t think humans can self-modify? The way tech is going, it seems to me that it’s at least a horse race (approximately 50:50 probability) as to which will FOOM first: the ability for humans to enhance themselves vs. the ability for an AI to modify itself.
Should we be more worried about UNI, unfriendly natural intelligence? That is, are we optimally dividing our efforts between avoiding UAI and avoiding UNI, given the relative probability-weighted dangers each presents?
We’re unFriendly, but we’re unFriendly in a weaker sense than we normally talk about around here: we bear some relationship to the implicit human ethics that we’d want an FAI to uphold, though not a perfect or complete one, and we probably implement a subset of the features that could be used to create a version of Friendliness. Most of us also seem somewhat resistant to the more obvious cognitive traps like wireheading. We’re not there yet, but we’re far further along the road to Friendliness than most points in mind-space.
We also have some built-in limitations that make a hard takeoff difficult for us: though we can self-modify (in a suitably general sense), our architecture is so messy that it’s not fast or easy, especially on individuals. And we run on hardware with a very slow cycle time, although it does parallelize very, very well.
More colloquially, given the kind of power that we talk about FAI eventually having, an arbitrary human or set of humans might use it to make giant golden statues of themselves or carve their dog’s face into the moon, but probably wouldn’t convert the world’s biomass to paperclips. Maybe. I hope.
Humans would be considered UFAI if they were digitised. Just consider a button that picks a random human and gives them absolute control. I wouldn’t press that button, because there is a significant chance that such a person would have goals that differ significantly from my own.
A self-enhancing human will still be hugely slower than a self-enhancing AI. Compare the progress in computing power and software to the progress in human prosthetics. Computers don’t die if you plug in a new RAM card (unless you’re very unlucky). You can run a newly optimized algorithm on a computer in a way that you just can’t do on an organic brain. If we get to a point where “humans” are as easy to optimize as computers, then they won’t be humans as we know them anyway.