Not really. An AI that didn’t have a specific desire to be friendly to mankind would want to kill us to cut down on unnecessary entropy increases.
As you get closer to the mark, with AGIs whose utility functions roughly resemble what we would want but are still wrong, the end results are most likely worse than death, especially since there should be many more near-misses than exact hits. For example, an AGI that refuses to let you die, no matter what you go through, and has little regard for your well-being otherwise, would be closer to an FAI than a paperclip maximizer that would just plain kill you. As you get closer to the core of Friendliness, you get all sorts of weird AGIs that want to do something that twistedly resembles something good, but is missing something, or is altered somehow, so that the end result is not at all what you wanted.
Is this true, or is it a useful assumption to protect us from doing something stupid?
Is it true that Friendliness is not an attractor, or is it that we cannot count on such a property unless it is absolutely proven to be the case?
My idea there was that if it's not Friendly, then it's not Friendly, ergo it is doing something you would not want an AI to be doing (if you thought faster and knew more and all that). That's the core of the quote you had there. A random intelligent agent would simply transform us into something it values, so we would most likely die very quickly. However, as you get closer to Friendliness, the AI is no longer totally indifferent to us; rather, it is maximizing something that could involve living humans. Now, if you take an AI that wants there to be living humans around, but is not known for sure to be Friendly, what could go wrong? My answer: many things, since what humans prefer to be doing is a rather complex set of stuff, and even quite small changes could make us really, really unsatisfied with the end result. At least, that's the idea I've gotten from posts here like Value is Fragile.
When you ask if Friendliness is an attractor, do you mean to ask whether intelligences near Friendly ones in design space tend to transform into Friendly ones? This seems rather unlikely, as those sorts of AIs are most likely capable of preserving their utility functions, and the direction of such a transformation is not "natural". For these reasons, arriving at Friendliness is not easy, and thus I'd say you have to have some way to ascertain Friendliness before you can trust an AI to be just that.