Bayesian epistemology boils down to: use probabilities to represent your confidence in your beliefs, use Bayes’s theorem to update your confidences—and try to choose a sensible prior.
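For concreteness, that update rule can be sketched in a few lines of Python. This is only an illustrative sketch; the numbers (a 0.3 prior, an 0.8 likelihood) are invented for the example, not taken from anywhere in this thread:

```python
# A minimal sketch of a single Bayesian update:
# posterior P(H|E) = P(E|H) * P(H) / P(E).

def bayes_update(prior, likelihood, marginal):
    """Return the posterior probability of a hypothesis H given evidence E."""
    return likelihood * prior / marginal

# Prior confidence that hypothesis H is true (made-up number).
prior_h = 0.3
# Probability of seeing evidence E if H is true / if H is false (made-up).
p_e_given_h = 0.8
p_e_given_not_h = 0.2
# Total probability of E, by the law of total probability.
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)

posterior = bayes_update(prior_h, p_e_given_h, p_e)
print(round(posterior, 3))  # the evidence raises confidence in H above the 0.3 prior
```

Choosing `prior_h` in the first place is the "sensible prior" problem mentioned above; Bayes's theorem itself only tells you how to move from prior to posterior once the evidence arrives.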
Yes, http://wiki.lesswrong.com/wiki/Sequences . Specifically the first four ‘sequences’ there. Many people on LW will say “Read the sequences!” as an answer to almost everything, like they’re some kind of Holy Writ, which can be off-putting, but in this case Yudkowsky really does answer most of the objections you’ve been raising, and those sequences are the simplest explanation I know of.
ETA—That’s weird. When I posted this reply I could have sworn that the username on the comment above wasn’t David_Gerard but one I didn’t recognise. I wouldn’t point D_G to the sequences because I know he’s read them, but would point a newbie to them. Apologies for the brainfart.
I’ve read the Sequences, and I don’t know what you are referring to—the first four Sequences do explain Bayes-related ideas and how to apply them in everyday life, but they don’t address all of the criticism that curi and others have pointed out. Did you have a specific post or series of posts in mind?
The criticisms here that haven’t been just noise have mostly boiled down to the question of choosing one hypothesis from an infinite sample space. I’m not sure exactly where in the Sequences EY covered this, but I know he did, in one of those first four, because I reread them recently. Sorry I can’t be more help.
What I mean to point out is the absence of something with a title along the lines of “Bayesian epistemology” that explains Bayesian epistemology specifically.
What LW has is “here’s the theorem” and “everything works by this theorem”, without that second claim being explained in coherent detail. I mean, Bayes structure is everywhere. But just noting that is not an explanation.
There’s potential here for an enormously popular front-page post that gets linked all over the net forever, because the SEP article is so awful …
LessWrong is seriously lacking a proper explanation of Bayesian epistemology (as opposed to the theorem itself). Do you have one handy?
http://yudkowsky.net/rational/bayes has a section on Bayesian epistemology that compares it to Popper’s ideas.
Of course I’ve read that.
First of all, it focuses on Bayes’s theorem without much actual epistemology.
Second, it does not discuss Popper’s actual ideas, only nasty myths about them. See the original post here:
http://lesswrong.com/lw/54u/bayesian_epistemology_vs_popper/