Assigning a low prior to theism is an abuse of algorithmic probability theory.
Can you explain this? Because I’ve been operating under the following assumption:
It’s enormously easier (as it turns out) to write a computer program that simulates Maxwell’s Equations, compared to a computer program that simulates an intelligent emotional mind like Thor.
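To make the first half of that assumption concrete, here is a minimal sketch of a program that steps Maxwell's equations forward on a one-dimensional grid. The normalized units, grid size, and Gaussian source are my own toy choices, not anything from the original discussion:

```python
import numpy as np

# Minimal 1D FDTD integrator for Maxwell's equations in vacuum,
# in normalized units at a Courant number of 1. Grid size, step
# count, and the source profile are arbitrary toy choices.
N, STEPS = 200, 100
ez = np.zeros(N)  # electric field E_z
hy = np.zeros(N)  # magnetic field H_y on the staggered grid

for t in range(STEPS):
    hy[:-1] += ez[1:] - ez[:-1]   # update H from the spatial difference of E
    ez[1:] += hy[1:] - hy[:-1]    # update E from the spatial difference of H
    ez[N // 2] += np.exp(-((t - 30.0) / 10.0) ** 2)  # inject a Gaussian pulse

print(ez.round(3))  # two pulses propagating outward from the center
```

The whole integrator fits in a dozen lines; nothing remotely that short simulates an intelligent emotional mind.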
In order to write a computer program that actually computes (rather than merely models) Maxwell's equations, you have to write a program that writes out a physical universe; and if what you want instead is a program that describes Maxwell's equations, then the interpretation you choose is more a matter of pragmatic decision theory than of algorithmic probability theory, at least in practice. (Bounded agents aren't committing an error of rationality when they decline to act like Homo Economicus; trying to would be decision-theoretically insane.)
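For reference, the formal object in dispute is the Solomonoff prior: for a universal prefix machine U, the weight of an observation string x is the total length-weighted mass of the programs whose output begins with x (a standard definition, stated from memory; notation varies):

```latex
% Solomonoff prior over observation strings x:
% every program p whose output on U begins with x (written x*)
% contributes 2^{-|p|} to the mass of x.
M(x) \;=\; \sum_{p \,:\, U(p) \,=\, x*} 2^{-|p|}
```

A short program that writes out a whole physical universe makes that universe heavy under M, but it says nothing by itself about where your observations sit inside the output; that locating step is the interpretation question above.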
But anyway. Specific things in the universe don't seem to be caused by gods; indeed, that would be hella unparsimonious: "God chose to add some ridiculous number of bits to His program just to make sure there was a 'Messiah gets crucified' attractor?" The local universe as a whole, on the other hand, is a different question entirely: there's the simulation argument.
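The parsimony complaint can be made quantitative. Under a length-weighted prior, each extra bit of hard-coded specification halves a hypothesis's weight, so a "ridiculous number of bits" translates directly into a penalty factor; this is a back-of-the-envelope reading of mine, not a theorem about any particular formalization:

```latex
% Each of k extra specification bits halves the prior weight:
\frac{2^{-(|p| + k)}}{2^{-|p|}} \;=\; 2^{-k},
\qquad \text{e.g. } 2^{-100} \approx 7.9 \times 10^{-31}.
```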
Your comment got voted up to +10 despite Eliezer’s argument being a straightforward error of algorithmic probability; I don’t know what to do about that and it stresses me out. Does anyone have ideas? It saddens me to see algorithmic probability so regularly abused on LW, but the few corrective posts on the matter, e.g. by Slepnev, don’t seem to have permeated the LW memeplex, probably because they’re too technical.
I think you are slightly misinterpreting things. As you pointed out, the established memeplex does lean heavily in favor of Eliezer’s position on algorithmic probability theory rather than Slepnev’s. But that doesn’t mean that all of the upvoters agree with Eliezer’s position—some of them probably just want to see you answer my question “Can you explain this?”. In fact, I would very much like to see this question answered thoroughly in a way that makes sense to me. Vladimir’s posts are a great start, but lacking knowledge of algorithmic probability theory, I don’t really know how to put all of it together.
Thanks for the correction; that people are at least interested in it is a good sign.
What we really need is a well-written gentle introduction to algorithmic probability theory that carefully and clearly shows how it works and what it does and doesn’t imply.