My point there was that if it’s not Friendly, then it’s not Friendly: it is doing something that you would not want an AI to be doing (if you thought faster and knew more, and all that). That’s the core of the quote you had there. A random intelligent agent would simply transform us into something it values, so we would most likely die very quickly. As you get closer to Friendliness, though, the AI is no longer totally indifferent to us; rather, it is maximizing something that could involve living humans. Now, if you take an AI that wants there to be living humans around, but is not known for sure to be Friendly, what could go wrong? My answer: many things, because what humans prefer is a rather complex set of values, and even quite small changes could leave us really, really unsatisfied with the end result. At least, that’s the idea I’ve gotten from posts here like Value is Fragile.
When you ask whether Friendliness is an attractor, do you mean to ask whether intelligences near Friendly ones in design space tend to transform into Friendly ones? That seems rather unlikely: AIs of that sort are most likely capable of preserving their utility functions, and there is nothing “natural” about a transformation in that particular direction. For these reasons, arriving at Friendliness is not easy, and so I’d say you need some way to ascertain Friendliness before you can trust an AI to be just that.