Closer to the truth vs further along
Consider a proposition P. It is either true or false. The green line represents us believing with 100% confidence that P is true. On the other hand, the red line represents us believing with 100% confidence that P is false.
We start off not knowing anything about P, so we begin at point 0, right at that black line in the middle. Then we observe data point A. A points towards P being true, so we move upwards towards the green line a moderate amount and end up at point 1. After that, we observe data point B. B is weak evidence against P, so we move slightly back towards the black line, but stay above it, ending up at point 2. So on and so forth, until all of the data relevant to P has been observed, and since we are perfect Bayesians, we end up being 100% confident that P is, in fact, true.
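To make the walk above concrete, here is a minimal Python sketch of the same process. It isn’t from the original post: the coin-flip framing and the 0.7/0.3 likelihoods are assumptions chosen purely for illustration, with P taken to be true in the toy world.

```python
import random

# Minimal sketch of the belief trajectory described above. In this toy
# world P is actually true, and each observation is a coin flip that
# lands heads more often when P is true. The 0.7/0.3 likelihoods are
# made-up numbers chosen only for illustration.

random.seed(1)

P_HEADS_IF_TRUE = 0.7   # P(heads | P is true)
P_HEADS_IF_FALSE = 0.3  # P(heads | P is false)

belief = 0.5            # point 0: maximum uncertainty about P
trajectory = [belief]

for _ in range(10):
    # P is true in this simulation, so observations are drawn accordingly.
    heads = random.random() < P_HEADS_IF_TRUE

    # Bayes' rule: posterior probability that P is true given the observation.
    like_true = P_HEADS_IF_TRUE if heads else 1 - P_HEADS_IF_TRUE
    like_false = P_HEADS_IF_FALSE if heads else 1 - P_HEADS_IF_FALSE
    belief = belief * like_true / (belief * like_true + (1 - belief) * like_false)

    trajectory.append(belief)

print([round(b, 3) for b in trajectory])
```

Individual observations (a “tails”, like data point B) push the belief back towards the black line, but the trajectory as a whole drifts towards the green line.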
Now, compare someone at point 3 to someone at point 4. The person at point 3 is closer to the truth, but the person at point 4 is further along.
This is an interesting phenomenon to me: the idea of being further along, but also further from the truth. I’m not sure exactly where to take this idea, but two thoughts come to mind.
The first thought is of valleys of bad rationality: incremental progress doesn’t always leave us better off.
The second thought is of how far along I actually am in my beliefs. For example, I am an atheist. But what if I had to debate the smartest theist in the world? Would I win that debate? I think I would, but I’m not actually sure. Perhaps they are further along than me. Perhaps I’m at point 3 and they’re at point 7.
I believe that, similar to conservation of expected evidence, there’s a rule of rationality saying that you shouldn’t expect your beliefs to change back and forth too much, because that would mean there’s a lot of uncertainty about the factual matters, and that uncertainty should bring you closer to max entropy. Can’t remember the specific formula, though.
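For what it’s worth, here is my best guess at the formula being gestured at in the comment above, added for reference rather than quoted from the thread: conservation of expected evidence says the prior equals the expected posterior, which makes a Bayesian’s sequence of credences a martingale, and that in turn pins down how much total movement you can expect.

```latex
% Conservation of expected evidence: the prior is the expectation of the posterior.
P(H) = P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E)

% Equivalently, a Bayesian's credences p_t form a martingale:
\mathbb{E}\left[\, p_{t+1} \mid p_0, \dots, p_t \,\right] = p_t

% One way to formalize "not too much back and forth": assuming the credence
% eventually settles at 0 or 1 (as in the diagram), the expected total squared
% movement is fixed by the starting point:
\mathbb{E}\left[ \sum_{t \ge 0} (p_{t+1} - p_t)^2 \right] = p_0\,(1 - p_0) \le \tfrac{1}{4}
```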
Good point. I was actually thinking about that and forgot to mention it.
I’m not sure how to articulate this well, but my diagram and OP were mainly targeted at gears-level models. Using the atheism example, the world’s smartest theist might have a gears-level model that is further along than mine. However, I expect that the world’s smartest atheist has a gears-level model that is further along than the world’s smartest theist.