If I’m understanding you correctly, and your point is “Toolbox thinking and lawful thinking are metatools in metatoolboxes, and should be used accordingly”, then you are actually arguing that toolbox reasoning is the universally best context-insensitive metaway to think.
Eliezer’s argument in this post is that “toolbox reasoning is the best way to think” is ambiguous between at least three interpretations:
(a) Humans shouldn’t try to base all their daily decisions on a single simple explicit algorithm.
(b) Humans should never try to think in terms of simple, all-encompassing, unconditional, exceptionless rules and patterns, or should only do so when there’s minimal risk of mistaking that rule for a simple-algorithm-you-can-base-every-decision-on.
(c) Humans should rarely try to think in terms of such rules. It’s useful sometimes, but only in weird exceptional cases.
Your point is that (a) is true, and that toolbox thinking therefore “wins”. But this depends on which interpretation we use for “toolbox thinking” — which is a question that doesn’t matter and has no right answer anyway, because “toolbox thinking” is just a phrase Eliezer made up to gesture at a possible miscommunication/confusion, and doesn’t have an established meaning.
Eliezer’s claim, if I understand him right, is that (a) is clearly true, (b) is clearly false, and (c) is very probably false. (c) is the more interesting version of the claim, and the hardest to quickly resolve, since terms like “rarely” are themselves vague and need more operationalization. But a fair number of people do reject something like (a), and a fair number of people do endorse something like (b), so we need to address those views in some way, while being careful not to weak-man people who have more credible and nuanced positions.
If I search for the phrase “toolbox thinking” on LessWrong, I find posts like “Developmental Thinking Shout-out to CFAR” that use it, which suggests to me that it’s not something Yudkowsky just made up.
In the context of this post, David Chapman’s How To Think Real Good doesn’t use the word “toolbox”, but it does speak of intellectual tools. When Yudkowsky uses the term here, it seems to me that he is gesturing towards the argument made in that article.
To me the disagreement seems to be:
Yudkowsky: Thinking of the maze as inherently being a Euclidean object by its essential nature is the correct way to think of the maze, even when you might actually use a different algorithm to navigate it.
Chapman: The maze doesn’t have an essential nature that you can describe as a Euclidean object. It becomes a Euclidean object only after you apply a specific mental model to it.
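To make the maze example concrete, here is a minimal sketch (my own illustration with a made-up maze layout, not an example from either author) contrasting the two stances: a global grid model you can search over, versus a purely local wall-following rule that never builds any such model.

```python
# A hypothetical illustration: the same maze engaged with in two ways.
# The "lawful" view models it as an explicit grid of coordinates and
# searches it; the "toolbox" view applies a local wall-following rule
# that never represents the maze as a whole.
from collections import deque

# 0 = open, 1 = wall; entrance at (0, 1), exit at (4, 3). (Made-up layout.)
MAZE = [
    [1, 0, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 0, 1],
]
START, GOAL = (0, 1), (4, 3)

def open_cell(pos):
    r, c = pos
    return 0 <= r < len(MAZE) and 0 <= c < len(MAZE[0]) and MAZE[r][c] == 0

def bfs(start, goal):
    """Global view: treat the maze as a known grid and search it whole."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        r, c = path[-1]
        for nxt in [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]:
            if open_cell(nxt) and nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

def wall_follower(start, goal, max_steps=200):
    """Local view: keep a hand on the right wall; no global model is built."""
    heading, pos, path = (1, 0), start, [start]  # start facing "down"
    for _ in range(max_steps):
        if pos == goal:
            return path
        (r, c), (dr, dc) = pos, heading
        # Try headings in priority order: right of current, straight, left, back.
        for turn in [(dc, -dr), (dr, dc), (-dc, dr), (-dr, -dc)]:
            step = (r + turn[0], c + turn[1])
            if open_cell(step):
                heading, pos = turn, step
                path.append(pos)
                break
    return None

print("BFS path:          ", bfs(START, GOAL))
print("Wall-follower path:", wall_follower(START, GOAL))
```

The wall-follower gets out (with a detour into a dead end) without ever holding a “maze as Euclidean object” picture in hand; whether the grid model is nonetheless what the maze really is, independent of any navigator, is exactly the point in dispute.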
Or to move to the more specific disagreement:
Yudkowsky: Reality is probabilistic in its essential nature, even if we might not have the mental tools to calculate things out with Bayes’ rule.
Chapman: Probability theory doesn’t extend logic, and there are things in reality that logic describes well but probability theory doesn’t, so reality is not probabilistic in its essential nature.
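For concreteness, here is a rough gloss of the technical point behind Chapman’s claim, as I understand it (my sketch, not a quotation from either author): probability theory generalizes propositional logic, with classical truth as the limiting case, but it has no native counterpart to the quantifiers of predicate logic.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% My gloss of the contrast, not a quote from either author.
Probability theory generalizes propositional logic: classical truth values
are the limiting case $P(A) \in \{0, 1\}$ of degrees of belief
$P(A) \in [0, 1]$, updated on evidence by Bayes' rule,
\[
  P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}.
\]
Predicate logic additionally quantifies over individuals, e.g.
\[
  \forall x \, \bigl( \mathrm{Raven}(x) \rightarrow \mathrm{Black}(x) \bigr),
\]
and probability theory assigns degrees only to whole propositions; it has
no built-in analogue of $\forall$ or $\exists$. That gap is what
``doesn't extend logic'' is pointing at.
\end{document}
```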