I think the word you are looking for is analysis.
Consider this toy scenario:
You observe two pieces of evidence:
A = B
B = C
Now, without gathering any additional evidence, you can figure out (given certain assumptions about the gears-level workings of A, B, and C) that A = C. Because it takes your brain a finite amount of time to realize this, it feels like a new piece of information. However, it is merely the result of analyzing the existing evidence to generate additional equivalent statements.
Of course, those new ways of describing the territory can be useful, but they shouldn’t result in Bayesian updates. Just like getting redundant evidence (e.g., 1. A = B, 2. B = A) shouldn’t move your estimate any further than getting a single piece of evidence would.
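To make the redundant-evidence point concrete, here is a minimal sketch (my own, not from the comment) in Python. The prior and the 4:1 likelihood ratio are made-up numbers; the point is just that re-observing the same statement under a new label shouldn’t be treated as an independent update:

```python
# A minimal sketch (invented numbers) of why redundant evidence
# shouldn't move a posterior twice: "B = A" is the same observation
# as "A = B", so only the first one carries information.

def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Return the posterior: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 0.5  # assumed prior P(hypothesis)
lr = 4.0     # assumed: the evidence is 4x likelier if the hypothesis is true

correct = bayes_update(prior, lr)                  # update once: 0.800
naive = bayes_update(bayes_update(prior, lr), lr)  # double-count: 0.941

print(f"after observing A = B:    {correct:.3f}")
print(f"also 'updating' on B = A: {naive:.3f}  <- spuriously overconfident")
```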
I see what you mean. However, I don’t see how that would fit in a sentence like “The theoretical evidence made me update slightly towards X.”
However, it is merely the result of analyzing the existing evidence to generate additional equivalent statements. Of course, those new ways of describing the territory can be useful, but they shouldn’t result in Bayesian updates.
Ah, but your brain is not a Bayes net! If it were a Bayes net your beliefs would always be in perfect synchrony with the data you’ve observed over time. Every time you observe a new piece of data, the information gets propagated and all of your beliefs get updated accordingly. The only way to update a belief would be to observe a new piece of data.
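As a toy illustration of that “perfect synchrony” (the Rain/WetGrass structure and all the numbers below are invented for illustration), here is a two-node example where the moment the data node is observed, the dependent belief is recomputed:

```python
# A toy two-node "Bayes net" (Rain -> WetGrass; all numbers invented)
# showing the perfect synchrony described above: the moment the data
# node is observed, every dependent belief is recomputed.

p_rain = 0.2         # prior P(Rain)
p_wet_if_rain = 0.9  # P(WetGrass | Rain)
p_wet_if_dry = 0.1   # P(WetGrass | no Rain)

# Observe WetGrass = True and immediately propagate it back to Rain.
numer = p_wet_if_rain * p_rain
denom = numer + p_wet_if_dry * (1 - p_rain)
p_rain_given_wet = numer / denom  # ~0.692, up from the 0.2 prior

print(f"P(Rain | WetGrass) = {p_rain_given_wet:.3f}")
```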
However, our brains are far from perfect at doing this. For example, I recently realized that the value side of the expected-value equation for voting is crazy large. I.e., the probability side of the equation is the chance of your vote being decisive (well, for argument’s sake) and the value side is how valuable it is for your vote to be decisive. At $100/citizen and 300M citizens, that’s $30B in value. Probably much more, IMO. So in a lot of states the EV of voting is pretty large.
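Sketching that arithmetic out (the $100/citizen and 300M figures are the comment’s own ballparks; the 1-in-10-million decisiveness probability is a placeholder I made up, and it varies a lot by state):

```python
# The voting arithmetic from the paragraph above. The $100/citizen and
# 300M figures come from the comment; the decisiveness probability is
# a hypothetical placeholder.

citizens = 300_000_000
value_per_citizen = 100      # dollars, per the comment's ballpark
p_decisive = 1 / 10_000_000  # hypothetical, swing-state-ish order of magnitude

total_value = citizens * value_per_citizen  # $30,000,000,000 = $30B
ev_of_voting = p_decisive * total_value     # $3,000 under these assumptions

print(f"value if decisive: ${total_value:,}")
print(f"EV of voting:      ${ev_of_voting:,.0f}")
```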
This realization of mine didn’t come from any new data, per se. I already knew that there were roughly 300M people in the US and that the impact of my candidate being elected is somewhere in the ballpark of $100/citizen. I just hadn’t… “connected the dots” until recently. If my brain were a perfect Bayes net, the dots would get connected immediately every time I observe a new piece of data, but in reality there is a huge number of “unconnected dots”.
(What an interesting phenomenon, having a lot of “unconnected dots” in your head. That makes it sound like a fun playground to explore.
And it’s interesting that there is a lot of intellectual work you can do without “going out into the world”. Not that you shouldn’t “go out into the world”, just that there is a lot you can do without it. I think I recall hearing that the ancient Greek philosophers considered it low-status to “go out into the world”; that was the job of the lower classes. High-class philosophers were supposed to sit in a chair and think.)