Ok. Take a chess position. Deep Blue is playing black. What is its next move?
A girl is walking down the street. A guy comes up to her, says hello. What’s her next move?
She says “hello” and moves right on. She does not pull out a gun and blow his head off. Now, back to Deep Blue.
You can put her potential actions into “More Likely” & “Less Likely” boxes, but you can’t predict them with any certainty. What if the guy was the rapist she’s been plotting revenge against since she was 7 years old?
What if the chess position is mate in one move? Cases that are sufficiently special to ride the short bus do not make a general argument.
That would be in the “More Likely” bucket, or rather an “Extremely Likely” bucket. You said that the girl would say “hello” & that is in the “More Likely” bucket too, but far from a certainty. She could ignore him, turn the other way, poke him in the stomach, or do any of an almost infinite other things. Either way, you’re resorting to insults & I’ve barely engaged with you, so I’m going to ignore you from here on out.
If you had to guess, would you say you’re probably ignoring Rolf to protect your epistemically null feelings, or to protect your epistemology? (In terms of the actual cognitive mechanism causally responsible for your avoidance, not primarily in terms of your explicit linguistic reason.)
I’m trying to protect Rolf because he can’t seem to interact with others without lashing out at them abusively.
This statement is true but not relevant, because it doesn’t demonstrate a disanalogy between the woman and Deep Blue. In both cases we can only reason probabilistically about what we expect to happen. This is true even if our knowledge of the software of Deep Blue or the neural state of the woman is so perfect that we can predict with near-certainty that it would take a physics-breaking miracle for anything other than X to occur. This doesn’t suffice for ‘certainty’ because we don’t have true certainty regarding physics, or regarding the experiences that led to our understanding of Deep Blue’s algorithms or the woman’s brain.
I would gather we have much more certainty about Deep Blue’s algorithms considering that we built them. You’re getting into hypothetical territory assuming that we can obtain near perfect knowledge of the human brain & that the neural state is all we need to predict future human behavior.
And you’d gather wrong. Our confidence that the woman says “hello” (and a fortiori our confidence that she does not take a gun and blow the man’s head off) exceeds our confidence that Deep Blue will make a particular chess move in response to most common plays by several orders of magnitude.
We started off well into hypothetical territory, back when Stuart brought Clippy into his thought experiment. Within that territory, I’m trying to steer us away from the shoals of irrelevance by countering your hypothetical (‘but what if [insert unlikely scenario here]? see, humans can’t be predicted sometimes! therefore they are Unpredictable!’) with another hypothetical. But all of this still leaves us within sight of the shoals.
You’re missing the point, which is not that humans are perfectly predictable by other humans to arbitrarily high precision and in arbitrarily contrived scenarios, but that our evolved intuitions are vastly less reliable when predicting AI conduct from an armchair than when predicting human conduct from an armchair. That, and our explicit scientific knowledge of cognitive algorithms is too limited to get us very far with any complex agent. The best we could do is build a second Deep Blue to simulate the behavior of the first Deep Blue.
I’m not trying to argue that humans are completely unpredictable, but neither are AIs. If they were, there’d be no point in trying to design a friendly one.
About your point that humans are less able to predict AI behavior than human behavior, where are you getting those numbers from? I’m not saying that you’re wrong, I’m just skeptical that someone has studied the frequency of girls saying hello to strangers. Deep Blue has probably been studied pretty thoroughly; it’d be interesting to read about how unpredictable Deep Blue’s moves are.
Right. And I’m not trying to argue that we should despair of building a friendly AI, or of identifying friendliness. I’m just noting that the default is for AI behavior to be much harder than human behavior for humans to predict and understand. This is especially true for intelligences constructed through whole-brain emulation, evolutionary algorithms, or other relatively complex and autonomous processes.
It should be possible for us to mitigate the risk, but actually doing so may be one of the most difficult tasks humans have ever attempted, and is certainly one of the most consequential.
Let’s make this easy. Do you think the probability of a person saying “hello” to a stranger who just said “hello” to him/her is less than 10%? Do you think you can predict Deep Blue’s moves with greater than 10% confidence?
Deep Blue’s moves are, minimally, unpredictable enough to allow it to consistently outsmart the smartest and best-trained humans in the world in its domain. The comparison is almost unfair, because unpredictability is selected for in Deep Blue’s natural response to chess positions, whereas predictability is strongly selected for in human social conduct. If we can’t even come to an agreement on this incredibly simple base case—if we can’t even agree, for instance, that people greet each other with ‘hi!’ with higher frequency than Deep Blue executes a particular gambit—then talking about much harder cases will be unproductive.
I really don’t know the probability of a person saying hello to a stranger who said hello to them. It depends on too many factors, like the look & vibe of the stranger, the history of the person being said hello to, etc.
Given a time constraint, I’d agree that I’d be more likely to predict that the girl would reply hello than to predict Deep Blue’s next move, but if there were no time constraint, I think Deep Blue’s moves would be almost 100% predictable. The reason is that all Deep Blue does is calculate; it doesn’t consult its feelings before deciding what to do, the way a human might. It calculates 200 million positions per second to determine what the end result of any sequence of chess moves will be. If you gave a human enough time, I don’t see why they couldn’t perform the same calculation & come to the same conclusion that Deep Blue would.
Edit:
Reading more about Deep Blue, it sounds like it is not as straightforward as just calculating. There is some wiggle room in there based on the order in which its nodes talk to one another. It won’t always play the same move given the same board position. Really fascinating! Thanks for engaging politely, it motivated me to investigate this more & I’m glad I did.
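A toy sketch of the effect described above (this is not Deep Blue’s actual search code, and the moves and scores are invented for illustration): a parallel engine collects (move, score) results from worker nodes and keeps the best one. When two moves evaluate to the same score, whichever result arrives first wins, so the chosen move can vary from run to run even for the identical position.

```python
import random

def pick_move(results):
    """Keep the first strictly better score; ties go to the earlier arrival."""
    best_move, best_score = None, float("-inf")
    for move, score in results:
        if score > best_score:
            best_move, best_score = move, score
    return best_move

# Hypothetical position where two candidate moves score identically.
results = [("Nf3", 0.31), ("Be2", 0.31), ("a3", -0.10)]

random.shuffle(results)   # models nondeterministic arrival order from the nodes
print(pick_move(results))  # "Nf3" or "Be2", depending on arrival order
```

The evaluation itself is fully deterministic; the nondeterminism enters only through the tie-breaking order, which is the “wiggle room” mentioned above.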
I’m not asking for the probability. I’m asking for your probability—the confidence you have that the event will occur. If you have very little confidence one way or the other, that doesn’t mean you assign no probability to it; it means you assign ~50% probability to it.
Everything in life depends on too many factors. If you couldn’t make predictions or decisions under uncertainty, then you wouldn’t even be able to cross the street. Fortunately, a lot of those factors cancel out or are extremely unlikely, which means that in many cases (including this one) we can make approximately reliable predictions using only a few pieces of information.
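The “factors cancel out” point can be illustrated with a small simulation (all numbers here are invented for the sake of the example, including the 0.9 base rate): even if the reply depends on many small situational factors, independent nudges in both directions largely wash out, so a prediction made from the base rate alone remains approximately reliable.

```python
import random

random.seed(0)

def says_hello(base_rate=0.9, n_factors=20):
    # Each situational factor (mood, vibe of the stranger, etc.) nudges
    # the probability slightly up or down; the nudges mostly cancel.
    p = base_rate + sum(random.uniform(-0.02, 0.02) for _ in range(n_factors))
    p = min(max(p, 0.0), 1.0)
    return random.random() < p

trials = 10_000
freq = sum(says_hello() for _ in range(trials)) / trials
print(round(freq, 2))  # stays close to the 0.9 base rate
```

Knowing all twenty factors would sharpen the prediction only slightly; the base rate alone already does most of the work, which is why a few pieces of information suffice.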
Without a time constraint, the same may be true for the girl (especially if cryonics is feasible), since given enough time we’d be able to scan her brain and run thousands of simulations of what she’d do in this scenario. If you’re averse to unlikely hypotheticals, then you should be averse to removing realistic constraints.