I haven’t done anything like a careful analysis, but at a guess, this shift has some promise for unifying the classical split between epistemic and instrumental rationality. Rationality becomes the art of seeking interaction with reality such that your anticipations keep syncing up more and more exactly over time.
“Unifying epistemic and instrumental rationality” doesn’t seem desirable to me — winning and world-mapping are different things. We have to choose between them sometimes, which is messy, but such is the nature of caring about more than one thing in life.
World-mapping is also a different thing from prediction-making, though they’re obviously related in that making your brain resemble the world can make it better at predicting future states of the world — just fast-forward your ‘map’ and see what it says.
The two can come apart, e.g., if your map is wrong but coincidentally gives you the right answer in some particular case — like a clock that’s broken and always says it’s 10am, but you happen to check it at 10am. Then you’re making an accurate prediction without an accurate map underlying it. But this isn’t the sort of thing to shoot for, or try to engineer; merely accurate predictiveness is a diminished version of world-mapping.
All of this is stuff that (in some sense) we know by experience, sure. But the most fundamental and general theory we use to make sense of truth/accuracy/reasoning needn’t be the earliest theory we can epistemically justify, or the most defensible one in the face of Cartesian doubts.
Earliness, foundationalness, and immunity-to-unrealistically-extreme-hypothetical-skepticism are all different things, and in practice the best way to end up with accurate and useful foundations (in my experience) is to ‘build them as you go’ and refine them based on all sorts of contingent and empirical beliefs we acquire, rather than to impose artificial earliness or un-contingent-ness constraints.