Okay, so, this looks like a case of arguing over semantics.
What I am saying is: “You can never correctly give probability 1 to something, and changing your mind in a non-Bayesian manner is simply incorrect. Assuming you endeavor to be /cough/ Less Wrong, you should force your System 2 to abide by these rules.”
What I think Lumifer is saying is, “Yes, but you’re never going to succeed because human brains are crazy kludges in the first place.”
In which case we have no disagreement, though I would note that I intend to do as well as I can.
I wasn’t restricting the domain to the brains of people who intrinsically value being rational agents.
I am sorry, I must have been unclear. I’m not saying “yes, but”, I’m saying “no, I disagree”.
I disagree that “you can never correctly give probability 1 to something”. To avoid silly debates over 1/3^^^3 chances I’d state my position as “you can correctly assign a probability that is indistinguishable from 1 to something”.
I disagree that “changing your mind in a non-Bayesian manner is simply incorrect”. That looks to me like an overbroad claim that’s false on its face. The human mind is rich and multifaceted; trying to limit it to performing a trivial statistical calculation doesn’t seem reasonable to me.
I think the claim is that, whatever method you use, it should approximate the answer the Bayesian method would give (which is optimal, but computationally infeasible).
The thing is, from a probabilistic standpoint, probability 1 is essentially infinity—in log-odds terms, it takes an infinite number of bits of evidence to reach probability 1 from any finite prior.
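One standard way to make the “infinite bits of evidence” point precise is Bayes’ theorem in log-odds form (assuming, purely for simplicity, that the pieces of evidence are conditionally independent given the hypothesis): each observation adds a finite number of bits, and probability 1 corresponds to infinite posterior log-odds, so no finite amount of evidence reaches it from a finite prior.

```latex
% Bayes' theorem in log-odds form, with evidence E_1, ..., E_n assumed
% conditionally independent given H and given \lnot H (a simplifying assumption).
\[
  \log_2 \frac{P(H \mid E_1,\dots,E_n)}{P(\lnot H \mid E_1,\dots,E_n)}
  = \log_2 \frac{P(H)}{P(\lnot H)}
  + \sum_{i=1}^{n} \log_2 \frac{P(E_i \mid H)}{P(E_i \mid \lnot H)}
\]
% Probability 1 corresponds to infinite posterior log-odds:
\[
  P(H \mid E_1,\dots,E_n) = 1
  \iff
  \log_2 \frac{P(H \mid E_1,\dots,E_n)}{P(\lnot H \mid E_1,\dots,E_n)} = +\infty ,
\]
% so a finite prior plus finitely many finite likelihood ratios always leaves
% the posterior strictly below 1.
```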
And the human mind is a horrific repurposed adaptation not at all intended to do what we’re doing with it when we try to be rational. I fail to see why indulging its biases is at all helpful.
Given that rationality is often defined here as winning, it seems to me you think natural selection works in the opposite direction.
… Um. No?
I might have been a little hyperbolic there—the brain is meant to model the world—but...
Okay, look, have you read the Sequences on evolution? Because Eliezer makes the point much better than I can as of yet.
Regardless of EY, what is your point? What are you trying to express?
*sigh*
My point, as I stated the first time, is that evolution is dumb, and does not necessarily design optimal systems. See: optic nerve connecting to the front of the retina. This is doubly true of very important, very complex systems like the brain, where everything has to be laid down layer by layer and changing some system after the fact might make the whole thing come crumbling down. The brain is simply not the optimal processing engine given the resources of the human body: it’s Azathoth’s “best guess.”
So I see no reason to pander to its biases when I can use mathematics, which I trust infinitely more, to prove that there is a rational way to make decisions.
How do you define optimality?
LOL.
Sorry :-/
So, since you seem to be completely convinced of the advantage of the mathematical “optimal processing” over the usual biased and messy thinking that humans normally do—could you, um, demonstrate this advantage? For example, financial markets provide rapid feedback and excellent incentives. It shouldn’t be hard to exploit some cognitive bias or behavioral inefficiency on the part of investors and/or traders, should it? After all, their brains are so horribly inefficient, to the point of being crippled, really...
Actually, no, I would expect that investors and/or traders would be more rational than the average for that very reason. The brain can be trained, or I wouldn’t be here; that doesn’t say much about its default configuration, though.
As far as biases go—how about the existence of religion? The fact that people still deny evolution? The fact that people buy lottery tickets?
And as far as optimality goes—it’s an open question; I don’t know. I do, however, believe that the brain is not optimal, because it’s a very complex system that hasn’t had much time to be refined.
That’s not good enough—you can “use mathematics” and that gives you THE optimal result, the very best possible—right? As such, anything not the best possible is inferior, even if it’s better than the average. So by being purely rational you should still be able to extract money out of the market, taking it from investors who are merely better than the not-too-impressive average.
As to optimality, unless you define it *somehow* the phrase “brain is not optimal” has no meaning.
That is true.
I am not perfectly rational. I do not have access to all the information I have. That is why I am here: to be Less Wrong.
Now, I can attempt to use Bayes’ Theorem on my own lack-of-knowledge, and predict probabilities of probabilities—calibrate myself, and learn to notice when I’m missing information—but that adds more uncertainty; my performance drifts back towards average.
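A minimal sketch of what “calibrate myself” can look like in practice (the predictions and numbers below are invented purely for illustration): log the probability assigned to each claim, then compare stated confidence against observed frequency.

```python
# Minimal calibration-check sketch. The entries below are made up purely for
# illustration; in practice you would log real predictions and outcomes.
from collections import defaultdict

# Each entry: (probability assigned to a claim, whether the claim turned out true).
predictions = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False), (0.6, True),
    (0.3, False), (0.3, False), (0.3, True),
]

buckets = defaultdict(list)
for stated_p, outcome in predictions:
    buckets[stated_p].append(outcome)

for stated_p in sorted(buckets):
    outcomes = buckets[stated_p]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {stated_p:.0%} -> observed {observed:.0%} over {len(outcomes)} claims")

# Well-calibrated means observed frequencies track stated probabilities;
# persistent gaps are a signal to adjust how much confidence you report.
```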
Not at all. I can define a series of metrics—energy consumption and “win” ratio being the most obvious—and define an n-dimensional function on those metrics, and then prove that, given bounds in all directions, a maximum exists so long as my function satisfies certain criteria (mostly continuity).
I can note that, given the space of possible functions and metrics, the chances of my brain being optimal by any of them are extremely low. I can’t really say much about brain-optimality, mostly because I don’t know enough biology to judge how much energy draw is too much, and the like; it’s trivial to show that our brain is not an optimal mind under unbounded resources.
Which, in turn, is really what we care about here—energy is abundant and healthcare is much better than in the ancestral environment, so if it turns out our health takes a hit because of optimizing for intelligence somehow, we can afford it.
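As a toy illustration of the existence claim (the metrics, weights, and bounds below are invented, not a model of any real brain): a continuous objective over a closed, bounded box of metric values attains a maximum by the extreme value theorem, and even a crude grid search can approximate where it sits.

```python
# Toy illustration only: the metric ranges and weights are invented for the sketch.
# The point is just that a continuous objective over a closed, bounded domain
# attains a maximum (extreme value theorem).

def objective(energy_watts: float, win_rate: float) -> float:
    """A continuous score that rewards winning and penalizes energy consumption."""
    return win_rate - 0.01 * energy_watts

# Bounded domain: energy in [0, 40] watts, win rate in [0, 1].
best_score, best_energy, best_win_rate = max(
    (objective(e, w), e, w)
    for e in [0.5 * i for i in range(81)]      # 0.0, 0.5, ..., 40.0
    for w in [0.01 * j for j in range(101)]    # 0.00, 0.01, ..., 1.00
)
print(f"best score {best_score:.3f} at energy={best_energy:.1f} W, win_rate={best_win_rate:.2f}")
```

None of this says where the actual optimum is; it only backs the narrower claim that “a maximum exists” is well-defined once the metrics and bounds are fixed.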
I don’t think you can guarantee ONE maximum. But in any case, the vastness of the space of all n-dimensional functions makes the argument unpersuasive. Let’s get a bit closer to common, garden-variety reality and ask a simpler question: in which directions do you think the human brain should change/evolve/mutate to become more optimal? And in these directions, is further always better, or is there a point beyond which one should not go?
Um, I have strong doubts about that. Your body affects your mind greatly (not to mention your quality of life).
… Okay?