Are there any exercises similar to calibration questions where people are 1) asked a question, 2) given some information relevant to the answer, and then required to state how that information changed the probabilities they report? I mean, if a brain 'does something similar to a Bayesian calculation', then the update should be measurable, and maybe trainable even on 'vaguely stated', word-only problems. And if it is easier to do in some domains than others, it would be interesting to know why.
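For instance, here is a minimal sketch of what "measurable" could mean, assuming the new information can be summarized as a likelihood ratio (the function names, the example numbers, and the log-odds scoring are all my own illustrative choices, not from any existing training exercise):

```python
import math

def bayesian_posterior(prior: float, likelihood_ratio: float) -> float:
    """Posterior after updating `prior` on evidence with likelihood ratio
    P(evidence | H) / P(evidence | not H)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

def update_gap_bits(prior: float, stated_posterior: float, likelihood_ratio: float) -> float:
    """How far (in bits of log-odds) the person's stated update falls from
    the ideal Bayesian update; negative means they under-updated."""
    ideal = bayesian_posterior(prior, likelihood_ratio)
    log_odds = lambda p: math.log2(p / (1 - p))
    return log_odds(stated_posterior) - log_odds(ideal)

# Example: someone says 30% before the hint, the hint is 4x as likely under
# the hypothesis as under its negation, and they say 50% afterwards.
print(bayesian_posterior(0.30, 4.0))     # ideal posterior ~ 0.63
print(update_gap_bits(0.30, 0.50, 4.0))  # ~ -0.78 bits: they under-updated
```

Scoring the gap in log-odds rather than raw probability is just one possible choice; the point is only that once prior, stated posterior, and the strength of the hint are all elicited, the size and direction of the person's deviation from the Bayesian answer becomes a number you could track or train on.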
Fermi estimates, and generating inside-view models before constructing an outside-view one and comparing the results, are both kind of in this direction, I think.