I don’t feel there is a need for that. You just present these things as tools, not fundamental ideas, while also discussing why they are not fundamental and why figuring out fundamental ideas is important. The relevant lesson is along the lines of Fake Utility Functions (the post is framed around utility functions, but the lesson doesn’t seem to depend on that), applied more broadly to epistemology.
> You just present these things as tools, not fundamental ideas, while also discussing why they are not fundamental and why figuring out fundamental ideas is important.
Thinking of Bayesianism as fundamental is what made some people (e.g., at least Eliezer and me) think that fundamental ideas exist and are important. (Does that mean we ought to rethink whether fundamental ideas exist and are important?) From Eliezer’s My Bayesian Enlightenment:
> The first time I heard of “Bayesianism”, I marked it off as obvious; I didn’t go much further in than Bayes’s rule itself. At that time I still thought of probability theory as a tool rather than a law. I didn’t think there were mathematical laws of intelligence (my best and worst mistake). Like nearly all AGI wannabes, Eliezer2001 thought in terms of techniques, methods, algorithms, building up a toolbox full of cool things he could do; he searched for tools, not understanding. Bayes’s Rule was a really neat tool, applicable in a surprising number of cases.
(Besides, even if your suggestion is feasible, somebody would have to rewrite a great deal of Eliezer’s material to not present Bayesianism as fundamental.)
The ideas of Bayesian credence levels and maximum entropy priors are important epistemic tools that, in particular, let you understand why those kludgy AI tools won’t get you what you want.
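To make these two tools concrete, here is a minimal sketch (an illustration under assumed toy conditions, not anything from the thread): knowing nothing about a coin’s bias beyond its range, the maximum entropy prior over the bias is uniform, and Bayes’s rule then turns observed flips into a graded credence rather than a binary verdict.

```python
# A minimal sketch of the two tools named above: a maximum-entropy prior
# and a Bayesian credence update. The coin example is hypothetical.
import numpy as np

# Hypothesis space: the coin's unknown bias toward heads, discretized.
biases = np.linspace(0.01, 0.99, 99)

# Maximum-entropy prior: knowing nothing beyond the range of the bias,
# the entropy-maximizing distribution over it is uniform.
prior = np.ones_like(biases) / len(biases)

# Evidence: 7 heads in 10 flips.
heads, flips = 7, 10
likelihood = biases**heads * (1 - biases) ** (flips - heads)

# Bayes's rule: posterior is proportional to likelihood times prior.
posterior = likelihood * prior
posterior /= posterior.sum()

# Credence (not a yes/no verdict) that the coin favors heads.
print(f"P(bias > 0.5 | data) = {posterior[biases > 0.5].sum():.3f}")
```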
> (Besides, even if your suggestion is feasible, somebody would have to rewrite a great deal of Eliezer’s material to not present Bayesianism as fundamental.)
(It doesn’t matter for the normative judgment, but I guess that’s why you wrote this in parentheses.)
I don’t think Eliezer misused the idea in the sequences, as the Bayesian way of thinking is a very important tool that must be mastered to understand many important arguments. And I guess at this point we are arguing about the sense of “fundamental”.