Right. And I’m not trying to argue that we should despair of building a friendly AI, or of identifying friendliness. I’m just noting that the default is for AI behavior to be much harder than human behavior for humans to predict and understand. This is especially true for intelligences constructed through whole-brain emulation, evolutionary algorithms, or other relatively complex and autonomous processes.
It should be possible for us to mitigate the risk, but actually doing so may be one of the most difficult tasks humans have ever attempted, and is certainly one of the most consequential.
Let’s make this easy. Do you think the probability of a person saying “hello” to a stranger who just said “hello” to him/her is less than 10%? Do you think you can predict Deep Blue’s moves with greater than 10% confidence?
Deep Blue’s moves are, minimally, unpredictable enough to allow it to consistently outsmart the smartest and best-trained humans in the world in its domain. The comparison is almost unfair, because unpredictability is selected for in Deep Blue’s natural response to chess positions, whereas predictability is strongly selected for in human social conduct. If we can’t even come to an agreement on this incredibly simple base case—if we can’t even agree, for instance, that people greet each other with ‘hi!’ with higher frequency than Deep Blue executes a particular gambit—then talking about much harder cases will be unproductive.
I really don’t know the probability of a person saying hello to a stranger who said hello to them. It depends on too many factors, like the look & vibe of the stranger, the history of the person being said hello to, etc.
Given a time constraint, I’d agree that I’d be more likely to predict that the girl would reply hello than to predict Deep Blue’s next move. But without a time constraint, I think Deep Blue’s moves would be almost 100% predictable, because all Deep Blue does is calculate; it doesn’t consult its feelings before deciding what to do the way a human might. It evaluates 200 million positions per second to determine the end result of any sequence of chess moves. Given enough time, I don’t see why a human couldn’t perform the same calculation & come to the same conclusion that Deep Blue would.
Edit:
Reading more about Deep Blue, it sounds like it’s not as straightforward as just calculating. There’s some wiggle room based on the order in which its nodes talk to one another, so it won’t always play the same move given the same board position. Really fascinating! Thanks for engaging politely; it motivated me to investigate this more & I’m glad I did.
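To make the nondeterminism point concrete: here’s a toy Python sketch (not Deep Blue’s actual architecture, and with invented move names and scores) of how the order in which evaluations arrive from parallel nodes can change which of several equally-scored moves gets played:

```python
# Toy illustration: when several moves tie on evaluation score, the search
# reports whichever tied move it saw first. If worker nodes finish in a
# different order, a different tied move "wins", so the same position can
# produce different moves on different runs.

def pick_move(moves_with_scores):
    """Return the first move with the maximum score, in arrival order."""
    best_score = max(score for _, score in moves_with_scores)
    for move, score in moves_with_scores:
        if score == best_score:
            return move

# Two moves evaluate to exactly the same score.
evaluations = [("Nf3", 0.30), ("Bc4", 0.30), ("a3", -0.10)]

print(pick_move(evaluations))                   # arrival order as given -> Nf3
print(pick_move(list(reversed(evaluations))))   # reversed arrival order -> Bc4
```

The engine is still “just calculating” in each run; the unpredictability comes from timing in the parallel hardware, not from anything feelings-like.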
I really don’t know the probability of a person saying hello to a stranger who said hello to them. It depends on too many factors
I’m not asking for the probability. I’m asking for your probability—the confidence you have that the event will occur. If you have very little confidence one way or the other, that doesn’t mean you assign no probability to it; it means you assign ~50% probability to it.
Everything in life depends on too many factors. If you couldn’t make predictions or decisions under uncertainty, then you wouldn’t even be able to cross the street. Fortunately, a lot of those factors cancel out or are extremely unlikely, which means that in many cases (including this one) we can make approximately reliable predictions using only a few pieces of information.
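The “factors cancel out” point can be illustrated with a toy simulation (all numbers invented for the sketch): even if every individual encounter depends on unknown situational factors, those factors average out, and the overall reply rate stays close to a simple base rate you could predict from a single number:

```python
import random

random.seed(0)

# Toy model: each encounter has unknown factors (mood, setting, the
# stranger's demeanor) that nudge the per-encounter reply probability,
# but the nudges average out across many encounters.

def replies_hello():
    base = 0.85                           # assumed base rate of returning a greeting
    nudge = random.uniform(-0.15, 0.15)   # unknown situational factors
    return random.random() < base + nudge

trials = 100_000
rate = sum(replies_hello() for _ in range(trials)) / trials
print(round(rate, 2))  # lands very close to the 0.85 base rate
```

None of the individual nudges is predictable, yet the aggregate prediction is reliable — which is all a probability assignment claims.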
but if there were not a time constraint, I think Deep Blue’s moves would be almost 100% predictable.
Without a time constraint, the same may be true for the girl (especially if cryonics is feasible), since given enough time we’d be able to scan her brain and run thousands of simulations of what she’d do in this scenario. If you’re averse to unlikely hypotheticals, then you should be averse to removing realistic constraints.