His description of LW there is: “LW suggests (sometimes, not always) that Bayesian probability is the main tool for effective, accurate thinking. I think it is only a small part of what you need.”
This seems to reflect the toolbox vs. law misunderstanding that Eliezer describes in the OP. Chapman is using a toolbox frame and presuming that, when LWers go on about Bayes, they are using a similar frame and thinking that it’s the “main tool” in the toolbox.
In the rest of the post it looks like Chapman thinks that what he’s saying is contrary to the LW ethos, but it seems to me that his ideas would fit in fine here. For example, Scott has also discussed how a robot can use simple rules that outsource much of its cognition to the environment, rather than constructing an internal representation and applying Bayes & expected utility maximization.
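To make the contrast concrete, here is a minimal, hypothetical sketch of the "simple rules" idea: a wall-following robot that reacts directly to raw sensor readings, with no internal map, no probability distribution, and no expected-utility calculation. The function name and the sensor readings are illustrative assumptions, not anything from Scott's or Chapman's posts.

```python
def reactive_step(left_distance, right_distance):
    """Pick an action from raw sensor readings alone.

    left_distance / right_distance are hypothetical rangefinder
    readings (distance to the nearest obstacle on each side).
    The robot keeps a wall roughly 0.5-1.5 units to its left.
    """
    if left_distance < 0.5:   # too close to the wall: veer away
        return "turn_right"
    if left_distance > 1.5:   # drifting from the wall: veer back
        return "turn_left"
    return "forward"          # comfortable band: keep going

# The rule never represents "where am I?" -- the wall's current
# distance is the only state it ever consults, so the environment
# itself is doing the bookkeeping an internal model would require.
print(reactive_step(0.3, 2.0))  # -> turn_right
print(reactive_step(2.0, 2.0))  # -> turn_left
print(reactive_step(1.0, 2.0))  # -> forward
```

The point isn't that such rules are better than Bayesian modeling, only that they're a legitimate tool, which is exactly the kind of claim that fits the toolbox frame without contradicting the law frame.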