You’re definitely paying for epistemic rationality with instrumental power if you spend all of your time contemplating metaethics so that you have a pure epistemic notion of what your goals are.
Humans start with effectively zero rationality. At some point, it becomes less winning to spend time gaining epistemic rationality than to spend time effecting your goals.
So it seems you can trade potential epistemic rationality for instrumental power: spend your time effecting change rather than becoming epistemically pure.
To respond to some of your later points:
Take a programming language like OCaml. OCaml supports both mutable and immutable state, and you could write an analysis over OCaml programs that would indeed barf and die at the first instance of mutation. Mutable state does make it incredibly difficult, sometimes to the point of impossibility, for conventional analyses on modern computers to prove facts about programs.
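To make that concrete, here's a toy sketch of the kind of analysis I mean (the miniature AST and the is_pure check are invented for illustration; a real analysis would run over OCaml's actual syntax tree):

```ocaml
(* Toy illustration only: a miniature expression language standing in
   for OCaml, with a purity check that gives up the moment it sees
   any mutable construct. *)
type expr =
  | Const of int
  | Var of string
  | Add of expr * expr
  | MakeRef of expr           (* ref e *)
  | Assign of string * expr   (* r := e *)
  | Deref of string           (* !r *)

(* Maximally conservative: any mutation anywhere makes the whole
   expression "impure", no matter how the rest behaves. *)
let rec is_pure = function
  | Const _ | Var _ -> true
  | Add (a, b) -> is_pure a && is_pure b
  | MakeRef _ | Assign _ | Deref _ -> false

let () =
  (* One assignment buried in an otherwise pure expression... *)
  let e = Add (Const 1, Assign ("r", Const 2)) in
  Printf.printf "pure? %b\n" (is_pure e)  (* prints: pure? false *)
```

The point is how conservative the check has to be: one Assign anywhere and the whole program is judged impure.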
But this doesn’t mean that a single ref cell destroys the benefits of purity in an OCaml program. To a human reading the program (which is really the most important use case), a single ref cell can be the best way to solve a problem, and the module system can easily abstract it away so that the interface as a whole is pure.
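A minimal sketch of what I mean (memoized Fibonacci is just an example I'm picking for illustration): the module keeps exactly one ref cell inside, but the signature exposes a plain function, so every caller sees something observationally pure.

```ocaml
(* Internal mutation, pure interface: the single ref cell below is
   invisible to callers, who just see int -> int. *)
module Fib : sig
  val fib : int -> int  (* same input always gives the same output *)
end = struct
  module IntMap = Map.Make (Int)

  (* The one ref cell: a cache of already-computed results. *)
  let cache : int IntMap.t ref = ref IntMap.empty

  let rec fib n =
    match IntMap.find_opt n !cache with
    | Some v -> v
    | None ->
        let v = if n < 2 then n else fib (n - 1) + fib (n - 2) in
        cache := IntMap.add n v !cache;
        v
end

let () =
  (* Fast despite the naive-looking recursion, thanks to the cache. *)
  Printf.printf "%d\n" (Fib.fib 40)
```

The ref never escapes the module, so the analysis-killing mutation from above is confined to a dozen lines that a reader can audit in isolation.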
Similarly, I agree that it’s important to strive for perfection in all cases. But striving for perfection doesn’t mean taking every available sacrifice for perfection. I can strive for epistemic perfection while still choosing locally not to improve my epistemic state. An AI might have a strict total ordering over its terminal goals, but a human never will. So as a human, I can simultaneously strive for epistemic perfection and instrumental usefulness.
In any case, I still think there’s a limit past which the return on investment in epistemic rationality diminishes into nothingness, and I think that limit is much closer than most Less Wrongers think, primarily because what matters most isn’t absolute rationality, but relative rationality in your particular social setting. You only need to be more able to win than everyone you compete with; becoming more able to win without actually winning is not only a waste of time, but actively harmful. It’s better to win two battles than to waste the time overpreparing for one. Overfocusing on epistemic rationality ignores the opportunity cost of failing to use your arts for something outside themselves.