I think that the problem you state is unsolvable. The human brain evolved to solve social problems related to survival, not to be a perfect Bayesian reasoner (Bayesian models tend to explode in computational complexity as the number of parameters increases). Unless you want to design a brain from scratch, I see no way for us to modify ourselves into perfect epistemic rationalists, short of enormous effort. That might be a shortcoming of my imagination, though.

There’s also the case that we shouldn’t be perfect rationalists: possibly the cost of adding a further decimal place to a probability estimate is much higher than the utility gained from it, though of course we can’t know that in advance. Also, sometimes our brain prefers to fool itself so that it is better motivated or happier, although Eliezer argued at length against this attitude.

So yeah, the landscape of the problem is thorny.
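To make the parenthetical about Bayesian models concrete: naive exact inference over a joint distribution of n binary variables needs a table of 2**n probabilities, so the cost grows exponentially with the number of variables. A minimal sketch (my own toy illustration, not anything from the original comment):

```python
# Toy illustration: size of the full joint probability table for
# n binary random variables. Naive exact Bayesian inference has to
# sum over this table, so it blows up exponentially in n.

def joint_table_size(n_binary_vars: int) -> int:
    """Number of entries in the full joint distribution table."""
    return 2 ** n_binary_vars

for n in (10, 20, 30):
    print(n, joint_table_size(n))
# 10 variables -> 1,024 entries; 30 variables -> over a billion.
```

Real Bayesian-network libraries exploit conditional independence to avoid materializing this table, but the worst case remains exponential.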
As far as I can tell, P(read sequences) < P(figure this out)
You really meant U(read sequences) < U(figure this out): the comparison is between utilities, not probabilities.
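The correction matters because the decision rule being invoked is "pick the option with the higher expected utility". A hedged sketch with made-up utility numbers (the values are hypothetical; only the comparison rule is the point):

```python
# Decision by expected utility: choose the option whose utility
# estimate is highest. The numeric utilities below are invented
# purely for illustration.

def choose(options: dict[str, float]) -> str:
    """Return the option name with the highest expected utility."""
    return max(options, key=options.get)

options = {
    "read sequences": 5.0,   # hypothetical utility
    "figure this out": 8.0,  # hypothetical utility
}
print(choose(options))  # → figure this out
```

Comparing probabilities P(...) instead would ask which action is more likely, which is not the quantity that drives the choice.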
I think the problem in your reasoning is that you’ve already presumed what the task entails; what you’ve missed is understanding ourselves. Science and reasoning already tell us that we share patterns of neural activity and are a social species, so each of us could be considered a cell in a larger brain. The question is not so much whether every cell decides to push the limits of its rationality, but whether the collective does, as long as the expected value is positive. But for that to happen, the first cells have to choose U(figure this out).
It’s not a choice between perfect and non-perfect; that’s absolute thinking. Rather, by inductive reasoning or QM-style probabilistic thinking, the question is “when should I stop refining this and share it instead?”, asked after enough self-modification and enough understanding of neuroscience and evolutionary biology to cover the important facts about what we are.
Once we stop thinking in terms of absolute perfection, it’s not a question of if, but of what we do. Your reasoning cannot already be flawed before you’ve even thought about the problem. We already know that behavior and conditioning can be changed: look at how people around the world join religious groups. So how do we capitalize on this brain mechanism to increase productivity, rationality, and so on?
Earlier I said, “stop refining it, then share it”; that’s all it takes, and the entire world will have changed. As for the point that our brain can fool itself: yes, but I don’t see why there can’t be objective measurement outside of subjective opinion, and that will surely be considered in the investigation process.