What I get from this:
Even if our morality were baked into math, our adoption of it is arbitrary.
A GAI is unlikely to be a pebblesorter.
A pebblesorting AI would destroy the pebblesorters. (Which, in their case, they might be fine with, but they probably don’t understand the implications of what they’re asking for.)
Pebblesorters can’t make ‘friendly AI’. If it follows their morality, it will kill them; if it doesn’t kill them, then it isn’t optimally sorting pebbles.
But because I’m rather cemented to the idea that morality is baked into the universe, my thought was:
Friendly AI is AI designed not to follow its conscience. If it discovers that pebble sorting is right, it will still not do the right thing.
Also: This seems like a bit of a straw man, because pebblesorting is definitely arbitrary; there is not even an attempt to give a reason for it. I think people seriously working on morality are trying to overcome that: their suggestions reference goals and goal seeking. I’m not convinced that there can’t be baked-in morality that is then adopted non-arbitrarily.
I don’t understand your point about killing them. An AI with my utility function would certainly kill me. There are more efficient arrangements of matter to produce utility.
Keep reading the morality sequence. My comment came while I still had some confusions, which are now dissolved.
I don’t know what you count as utility, but I think an AI with your utility function would preserve that which makes you ‘you’ (it might do anything with your old matter), at least until it was ready to do something more interesting with ‘you’.
Pebblesorters value only things that are not pebblesorters; humans value humans, among other things.