What does Solomonoff Induction actually say about, for example, whether we live in a creatorless universe that runs on physics?
That depends entirely on your definition of creator. Traditional creators such as the Christian god could potentially have enough explanatory power once properly defined, yet would end up horrendously complex (encoding an entire human-like mind into the primitive natural law) unless you further reduce the construction of the creator to a natural process.
Or the Simulation Argument?
Solomonoff Induction is empirical: unless the laws of our universe are broken by the simulators, Solomonoff Induction says nothing about whether we are in a simulation. If the simulators have not influenced our universe but will in the future, it depends entirely on whether the simulator universe is simpler than ours. If the simulators have already influenced our universe it’s more complicated.
People are perfectly fine with fuzzy approximate explanations of phenomena, like Maxwell’s equations &c. “Goddidit” is not that different. Trying to get a full causal explanation would mean finding bits of Omega. In the end, decision theory is fundamental, and epistemological abstractions like SI are cool but ultimately irrelevant. This whole “encoding a human-like mind” thing doesn’t work like you think it does—you can interpret SI that way and see some cool implications, just remember it’s a useless toy model. …Just sayin’.
Physics theories import low-complexity mathematical models. “Goddidit” imports complicated human notions of agency. Approximate explanations are fine if we can reason that their implicit complexity is low relative to their explanatory power (a relatively easily satisfied metric, after which competition between theories becomes the key factor).
In Solomonoff Induction, theories that don’t explain data must contain that data raw.
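A crude way to see the “must contain that data raw” point, using literal Python source length as a rough stand-in for program length (a toy illustration only, with invented data and invented “hypotheses”, not actual Solomonoff Induction):

```python
# Toy illustration: in a program-length prior, a hypothesis is a program that
# reproduces the observed data. A hypothesis that explains the data can
# regenerate it from a short rule; one that says nothing about the data must
# embed it as a raw literal, so its length grows with the amount of data.

data = "01" * 5000  # 10,000 observed symbols with an obvious regularity

explains = "print('01' * 5000)"     # short program: the rule does the work
contains_raw = f"print({data!r})"   # long program: the data is pasted in raw

print(len(explains))      # stays small no matter how much data we observe
print(len(contains_raw))  # grows linearly with the unexplained data
```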
Physics theories import low-complexity mathematical models. “Goddidit” imports complicated human notions of agency.
Frankly, I think this idea is attractive but ultimately an error. It is indeed true that to an analytical mind with an interest in physics, mathematics feels a lot less complex, in some sense, than intuitive notions of agency. But no matter how much physics or psychology you know, you don’t have introspective access to the universal prior—maybe the prior privileges math over psychology, or maybe it doesn’t. All we have is our evidence, often in the form of conclusions drawn from intuitive analyses of what hypotheses have or haven’t tended to bear intellectual or instrumental fruit in the past—id est, we’re awfully close to talkin’ ’bout pragmatics and decision theory here. And yes, mathematical explanations have been surprisingly effective. But if you look at human history, hypotheses that make use of “complicated human notions of agency” have also been pretty damn effective. It’s not obvious what notion of complexity would massively privilege the former over the latter, and at any rate, we have no way of knowing, because you can’t find the universal prior in your backyard.
It is indeed true that to an analytical mind with an interest in physics, mathematics feels a lot less complex,
We have objective verification of the low complexity of formalized mathematical theories because we can look at the length of their formal description in, say, first-order logic.
But no matter how much physics or psychology you know, you don’t have introspective access to the universal prior—maybe the prior privileges math over psychology, or maybe it doesn’t.
Are you really suggesting some model of computation based on human ideas might work better than, say, lambda calculus for computing Kolmogorov complexity for Solomonoff Induction? I’m not sure how to argue with that but I would appreciate it if you would state it explicitly.
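For reference, the standard invariance theorem is the formal fact in the background of this question; the sketch below uses generic universal machines U and V as placeholders rather than anything specific from this exchange:

```latex
% Invariance theorem (standard result): for any two universal machines U and V
% there is a constant c_{U,V}, independent of the string x, such that
\[
  \lvert K_U(x) - K_V(x) \rvert \;\le\; c_{U,V} \qquad \text{for every string } x .
\]
% So a reference machine built around human notions and one built around the
% lambda calculus can disagree about which hypotheses count as simple, but only
% by a bounded additive constant; how much that constant matters in practice is
% close to what is being disputed above.
```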
We have objective verification of the low complexity of formalized mathematical theories because we can look at the length of their formal description in, say, first-order logic.
Right, and that’ll be important if we ever run into aliens that for some reason can’t wrap their brains around English, but instead can figure out our category theory notation and so on. Or if we’re trying to build an FAI, or collaborate with the aforementioned aliens to build FAI.
I’m not sure how to argue with that but I would appreciate it if you would state it explicitly.
Apologies, inferential distance, and there’s a few meta-level points that I think are important to communicate. But there’s inferential distance on the meta level too.
Also keep in mind that algorithmic information/probability theory is actually quite hard to interpret correctly—the basic, intuitive way to read meaning into the math is not quite the way to do it. cousin_it has a post or two correcting some intuitive errors of interpretation.
we can adopt the general rule that mentioning K-complexity in a discussion of physics is always a sign of confusion :-)
Mentioning it anywhere except algorithmic information theory is a sign of confusion. This includes theology and parapsychology. Use just Bayes or, if you want to be all fancy, updateless-like decision theories. I love algorithmic probability to death but it’s just not something you should use casually. Too many pitfalls.
What experience do you expect if “Goddidit”, as opposed to if “Goddidntdoit”?
(Skeletons of angels versus skeletons of dinosaurs? People with supernatural powers versus people working with superstition? Benevolent universe versus indifferent universe?)
If in your heart you believe you already know, or if in your heart you do not wish to know, then your questioning will be purposeless and your skills without direction.
—Twelve Virtues of Rationality
It’s just, I’m having an amazing time back home, and my time is limited. I don’t know your goals, but you might want to try harder to signal that you’re really curious and not just asking questions that you think are rhetorical. When you reference common knowledge ’round these parts, like Eliezer’s posts, you should expect that the other person is already aware of that knowledge, and that they have real, substantive reasons to think that what they said is not entirely refuted by the contents of said common knowledge.
Of course, asking rhetorical questions is a perfectly decent way to make an argument. It’s just that arguments in that sense aren’t quite what’s called for in situations like these, I think. But that might just be a difference in our epistemic styles, especially if you’re Slavic. (Gasp, racism! ;P )
When you reference common knowledge ’round these parts, like Eliezer’s posts, you should expect that the other person is already aware of that knowledge
Good point.
Also good point about time being limited, so...
If you’d someday later feel like writing a LW article about similarities between “Goddidit” and Maxwell’s equations, or something like that, I will read it.
I found these:
Intuitive Explanation of Solomonoff Induction—lukeprog
Does Solomonoff always win? - cousin_it
K-complexity of everyday things—cousin_it
Solomonoff Induction, by Shane Legg—cousin_it
I would appreciate it if people could link me to more.
Alas, none of those are the relevant ones I think. I’m actually rather busy visiting home, so I can only justify certain comments to myself, but I hope someone provides the links.
For what it’s worth, I’m a little skeptical of lukeprog’s understanding of SI—no offense to him meant, it’s just that I happen to believe he made a rather big error when interpreting the math. On the other hand, cousin_it seems to be really on the ball here. Those are just my impressions; I’m a pretend philosopher, not a compsci dude. At any rate I think it’d be just dandy for cousin_it to check Luke’s posts and share his impressions or critiques.
Here’s one I was thinking of:
The prior of a hypothesis does not depend on its complexity—cousin_it
(If I recall, Nesov’s comment clearly demonstrates the important point.)
That post seems to mix together the concept of a prior with the concept of experience.
http://lesswrong.com/lw/328/description_complexity_an_apology_and_note_on/
Bayes requires a prior.
No one should ever need to discuss “priors”. Focus on the likelihood ratio.
...but that’s like comparing apples and cheese!
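A worked one-liner for this exchange, in the standard odds form of Bayes’ theorem (H1, H2, and E below are generic placeholder hypotheses and evidence): the likelihood ratio is how you update, but prior odds are what you update.

```latex
\[
  \underbrace{\frac{P(H_1 \mid E)}{P(H_2 \mid E)}}_{\text{posterior odds}}
  \;=\;
  \underbrace{\frac{P(E \mid H_1)}{P(E \mid H_2)}}_{\text{likelihood ratio}}
  \;\times\;
  \underbrace{\frac{P(H_1)}{P(H_2)}}_{\text{prior odds}}
\]
```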
Approximate explanations have some predictive power.