Hm, that seems to be more in the context of “patching over” ideas that are mostly right but have some problems. I’m talking about “fixing” theories that are exactly right but impossible to apply.
One of the more interesting experiences I’ve had learning about physics is discovering how much of our understanding of it is a massive oversimplification, because it’s just too hard to calculate the optimal answer. Most Nobel Prize–winning work comes not from new laws of physics, but from figuring out how to approximate those laws in a way that is complicated enough to be useful but simple enough to be solvable. And so with rationality in this case, I think. The high-importance rationality work is not about new laws of rationality or strange-but-easy stuff, but about approximations of rationality that are complicated enough to be useful but simple enough to be solvable.
You mean something along the lines of what I have written here?