Nope. I assign it the probability of 1.
On the other hand, you think I’m mistaken about that.
On the third tentacle I think you are mistaken because, among other things, my mind does not assign probabilities like 0.999999999 -- it’s not capable of such granularity. My wetware rounds such numbers and so assigns the probability of 1 to the statement that today is Friday.
So if you went in to work and nobody was there, and your computer says it’s Saturday, and your watch says Saturday, and the next thirty people you ask say it’s Saturday… you would still believe it’s Friday?
If you think it’s Saturday after any amount of evidence, after assigning probability 1 to the statement “Today is Friday,” then you can’t be doing anything vaguely rational—no amount of Bayesian updating will allow you to update away from probability 1.
If you ever assign something probability 1, you can never be rationally convinced of its falsehood.
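The arithmetic behind this is easy to check. A minimal sketch (the likelihoods are toy numbers, not from the discussion) showing that Bayes' theorem leaves a prior of 1 untouched, while a prior of 0.999999999 collapses quickly under the same evidence:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from Bayes' theorem."""
    joint_h = prior * p_e_given_h
    joint_not_h = (1 - prior) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Evidence strongly AGAINST the hypothesis: P(E|H)=0.0001, P(E|~H)=0.9999.
p = 1.0
for _ in range(100):
    p = bayes_update(p, 0.0001, 0.9999)
print(p)  # 1.0 -- a prior of 1 is immune to any amount of evidence

p = 0.999999999
for _ in range(3):
    p = bayes_update(p, 0.0001, 0.9999)
print(p)  # ~0.001 -- three such observations demolish a "mere" 0.999999999
```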
That’s not true. There are ways to change your mind other than through Bayesian updating.
Sure. But by definition they are irrational kludges made by human brains.
Bayesian updating is a theorem of probability: it is literally the formal definition of “rationally changing your mind.” If you’re changing your mind through something that isn’t Bayesian, you will get the right answer iff your method gives the same result as the Bayesian one; otherwise you’re just wrong.
The original point was that human brains are not all Bayesian agents. (Specifically, that they could be completely certain of something)
… Okay?
Okay, so, this looks like a case of arguing over semantics.
What I am saying is: “You can never correctly give probability 1 to something, and changing your mind in a non-Bayesian manner is simply incorrect. Assuming you endeavor to be /cough/ Less Wrong, you should force your System 2 to abide by these rules.”
What I think Lumifer is saying is, “Yes, but you’re never going to succeed because human brains are crazy kludges in the first place.”
In which case we have no disagreement, though I would note that I intend to do as well as I can.
I wasn’t restricting the domain to the brains of people who intrinsically value being rational agents.
I am sorry, I must have been unclear. I’m not saying “yes, but”; I’m saying “no, I disagree”.
I disagree that “you can never correctly give probability 1 to something”. To avoid silly debates over 1/3^^^3 chances I’d state my position as “you can correctly assign a probability that is indistinguishable from 1 to something”.
I disagree that “changing your mind in a non-Bayesian manner is simply incorrect”. That looks to me like an overbroad claim that’s false on its face. The human mind is rich and multifaceted; trying to limit it to performing a trivial statistical calculation doesn’t seem reasonable to me.
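The “indistinguishable from 1” position has a mechanical analogue worth noting: 64-bit floating point performs exactly the rounding the wetware is accused of. A quick illustration:

```python
# IEEE 754 doubles carry roughly 16 significant decimal digits; any
# probability closer to 1 than about 1e-16 rounds to exactly 1.0.
print(1.0 - 1e-9 == 1.0)    # False: 0.999999999 is still its own number
print(1.0 - 1e-17 == 1.0)   # True: "indistinguishable from 1" at this precision
```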
I think the claim is that, whatever method you use, it should approximate the answer the Bayesian method would use (which is optimal, but computationally infeasible)
The thing is, from a probabilistic standpoint, one is essentially infinity—it takes an infinite number of bits of evidence to get probability 1 from any finite prior.
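The “infinite bits” point can be made concrete in log-odds form (a standard reformulation, not from the thread): each independent 2:1 likelihood ratio adds exactly one bit of log-odds, and probability 1 sits at infinite log-odds.

```python
import math

def log2_odds(p):
    """Belief strength in bits; probability 1 would be +infinity."""
    return math.log2(p / (1 - p))

print(log2_odds(0.5))          # 0.0 -- even odds, zero bits
print(log2_odds(0.999999999))  # ~29.9 bits
# A 2:1 likelihood ratio adds one bit per observation, so any finite run
# of evidence leaves log-odds finite: log2_odds(1.0) is a division by
# zero, i.e. it would take "infinitely many bits of evidence" to reach.
```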
And the human mind is a horrific repurposed adaptation not at all intended to do what we’re doing with it when we try to be rational. I fail to see why indulging its biases is at all helpful.
Given that here rationality is often defined as winning, it seems to me you think natural selection works in the opposite direction.
… Um. No?
I might have been a little hyperbolic there—the brain is meant to model the world—but...
Okay, look, have you read the Sequences on evolution? Because Eliezer makes the point much better than I can as of yet.
Regardless of EY, what is your point? What are you trying to express?
*sigh*
My point, as I stated the first time, is that evolution is dumb, and does not necessarily design optimal systems. See: optic nerve connecting to the front of the retina. This is doubly true of very important, very complex systems like the brain, where everything has to be laid down layer by layer and changing some system after the fact might make the whole thing come crumbling down. The brain is simply not the optimal processing engine given the resources of the human body: it’s Azathoth’s “best guess.”
So I see no reason to pander to its biases when I can use mathematics, which I trust infinitely more, to prove that there is a rational way to make decisions.
How do you define optimality?
LOL.
Sorry :-/
So, since you seem to be completely convinced of the advantage of the mathematical “optimal processing” over the usual biased and messy thinking that humans normally do—could you, um, demonstrate this advantage? For example, financial markets provide rapid feedback and excellent incentives. It shouldn’t be hard to exploit some cognitive bias or behavioral inefficiency on the part of investors and/or traders, should it? After all, their brains are so horribly inefficient, to the point of being crippled, really...
Actually, no, I would expect investors and/or traders to be more rational than the average for that very reason. The brain can be trained, or I wouldn’t be here; that doesn’t say much about its default configuration, though.
As far as biases—how about the existence of religion? The fact that people still deny evolution? The fact that people buy lottery tickets?
And as far as optimality goes—it’s an open question, I don’t know. I do, however, believe that the brain is not optimal, because it’s a very complex system that hasn’t had much time to be refined.
That’s not good enough—you can “use mathematics” and that gives you THE optimal result, the very best possible—right? As such, anything not the best possible is inferior, even if it’s better than the average. So by being purely rational you should still be able to extract money out of the market, taking it from investors who are merely better than the not-too-impressive average.
As to optimality, unless you define it *somehow* the phrase “brain is not optimal” has no meaning.
That is true.
I am not perfectly rational. I do not have access to all the information I have. That is why I am here: to be Less Wrong.
Now, I can attempt to use Bayes’ Theorem on my own lack-of-knowledge, and predict probabilities of probabilities—calibrate myself, and learn to notice when I’m missing information—but that adds more uncertainty; my performance drifts back towards average.
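One standard way to formalize “probabilities of probabilities” is to keep a distribution over one’s own hit rate. A hypothetical sketch (the Beta prior, the class name, and the outcomes are made up purely for illustration):

```python
class Calibration:
    """Track a Beta(a, b) belief about your own hit rate for
    predictions you announced at some fixed confidence level."""
    def __init__(self):
        self.a, self.b = 1.0, 1.0   # Beta(1, 1): uniform prior over hit rates

    def record(self, came_true):
        # Standard conjugate update: one pseudo-count per resolved prediction.
        if came_true:
            self.a += 1
        else:
            self.b += 1

    def estimated_hit_rate(self):
        return self.a / (self.a + self.b)  # posterior mean

cal = Calibration()
for outcome in [True] * 7 + [False] * 3:   # 7 of 10 "90%" calls came true
    cal.record(outcome)
print(cal.estimated_hit_rate())  # ~0.67: the Beta(8, 4) mean -- overconfident at 90%
```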
Not at all. I can define a series of metrics—energy consumption and “win” ratio being the most obvious—define an n-dimensional function on those metrics, and then prove that, given bounds in all directions, a maximum exists so long as my function satisfies certain criteria (mostly continuity).
I can note that, given the space of possible functions and metrics, the chance of my brain being optimal by any of them is extremely low. I can’t really say much about brain-optimality, mostly because I don’t know enough biology to judge how much energy draw is too much, and the like; it is trivial, though, to show that our brain is not an optimal mind under unbounded resources.
Which, in turn, is really what we care about here—energy is abundant, healthcare is much better than in the ancestral environment, so if it turns out our health takes a hit because of optimizing for intelligence somehow we can afford it.
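The existence claim here is essentially the extreme value theorem: a continuous score on a closed, bounded domain attains its maximum. A toy sketch, with entirely made-up metrics, weights, and bounds (nothing physiological):

```python
# Toy "brain score": reward win ratio, penalize energy draw.
# The weights and bounds below are illustrative assumptions only.
def score(energy_watts, win_ratio):
    return win_ratio - 0.01 * energy_watts

best = max(
    ((e, w) for e in range(10, 41)                    # bounded: 10..40 W
            for w in (i / 100 for i in range(101))),  # bounded: 0.00..1.00
    key=lambda ew: score(*ew),
)
print(best)  # (10, 1.0): on a bounded domain the maximum is attained
```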
I don’t think you can guarantee ONE maximum. But in any case, the vastness of the space of all n-dimensional functions makes the argument unpersuasive. Let’s get a bit closer to common, garden-variety reality and ask a simpler question: in which directions do you think the human brain should change/evolve/mutate to become more optimal? And in those directions, is it the further the better, or is there a point beyond which one should not go?
Um, I have strong doubts about that. Your body affects your mind greatly (not to mention your quality of life).
Yes.
No, unless you define “rationally changing your mind” that way, in which case the claim is just circular.
Nope.
The ultimate criterion of whether the answer is the right one is real life.
While I’m not certain, I’m fairly confident that most people’s minds don’t assign probabilities at all. At least when this thread began, it was about trying to infer implicit probabilities from how people update their beliefs; if there is any situation that would lead you to conclude that it’s not Friday, then that suffices to show that your mind’s internal probability for “today is Friday” is not 1.
Most of the time, when people talk about probabilities or state the probabilities they assign to something, they’re talking about loose, verbal estimates, which are created by their conscious minds. There are various techniques for trying to make these match up to the evidence the person has, but in the end they’re still just basically guesses at what’s going on in your subconscious. Your conscious mind is capable of assigning probabilities like 0.999999999.