The complex loving god hypothesis is incredibly complicated. Minds are so complex we can’t even build one yet.
There are two problems with this argument. First, each individual god might be very improbable, but that could be counterbalanced by the astronomical number of possible gods (e.g. consider all possible tweaks to the holy book), so you can argue a priori against specific flavors of theism but not against theism in general. Second, if Eliezer is right and AI can develop from a simple seed someone can code up in their garage, that means powerful minds don’t need high K-complexity. A powerful mind (or a program that blossoms into one) could even be simpler than physics as we currently know it, which is already quite complex and seems to have even more complexity waiting in store.
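(A toy numerical sketch of the first problem, with numbers invented purely for illustration: if the specific god-hypotheses are mutually exclusive, their priors add, so "theism in general" can end up far more probable than any single fully specified variant.)

    # Hypothetical toy numbers, not anyone's actual estimates.
    prior_per_variant = 2.0 ** -40   # tiny prior on one fully specified god
    num_variants = 2 ** 30           # e.g. all distinct tweaks to the holy book

    prior_some_god_or_other = prior_per_variant * num_variants
    print(prior_some_god_or_other)   # 2**-10, about 0.001: small, but not astronomically small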
IMO a correct argument against theism should focus on the “loving” part rather than the “mind” part, and focus on evidence rather than complexity priors. The observed moral neutrality of physics is more probable if there’s no moral deity. Given what we know about evolution etc., it’s hard to name any true fact that makes a moral deity more likely.
I’m not sure that everything in my comment is correct. But I guess LW could benefit from developing an updated argument against (or for) theism?
Your argument about K-complexity is a decent shorthand, but it leads people to think that this “simplicity” thing is baked into the universe, as if we had direct access to the universal prior and its reference machine language, when it’s really just another way of saying a hypothesis is more probable after updating on a ton of evidence. As you said, it should be about evidence, not priors. No one’s ever seen a prior; at best there’s a brain’s frequentist judgment about which “priors” are good to use when.
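(A toy illustration of the reference-machine point: "description length" is only defined relative to some encoder, and nobody has direct access to the "true" universal one. Two off-the-shelf compressors stand in for two reference machines here; the example strings are arbitrary.)

    import bz2
    import zlib

    # The same string gets a different "description length" depending on which
    # machine you measure it with. For genuine universal machines the disagreement
    # is bounded by a translation constant (the invariance theorem), but it never
    # vanishes, and no particular machine is privileged a priori.
    for s in [b"ab" * 64, bytes(range(128)), b"the quick brown fox jumps over the lazy dog"]:
        print(len(s), len(zlib.compress(s)), len(bz2.compress(s)))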
Second, if Eliezer is right and AI can develop from a simple seed someone can code up in their garage, that means powerful minds don’t need high K-complexity.
That may be somewhat misleading. A seed AI, denied access to external information, will be a moron. Yet the more information it takes into memory, the higher the K-complexity of the thing taken as a whole.
You might be able to code a relatively simple AI in your garage, but if it’s going to be useful it can’t stay simple.
ETA: Also, if you take the computer system as a whole, with all of its programming libraries and hardware arrangements, even ‘hello world’ would have high K-complexity. If you’re assigning probability mass to whatever produces a given output on the screen, I’m not sure it’s reasonable to separate the two out and treat K-complexity as simply a manifestation of high-level APIs.
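(A toy illustration of the absorbed-information point, using compressed size as a very rough stand-in for K-complexity; the "seed" string below is obviously made up.)

    import os
    import zlib

    # A short, regular seed compresses to almost nothing, but the seed plus
    # everything it has absorbed is dominated by the absorbed information
    # (random bytes stand in for external data the system cannot regenerate).
    seed = b"def seed_ai(observations): ...\n" * 20
    absorbed = os.urandom(100_000)

    print(len(zlib.compress(seed)))             # tens of bytes
    print(len(zlib.compress(seed + absorbed)))  # roughly 100,000 bytes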
For every program that could be called a mind, there are very, very, very many that are not.
Eliezer’s “simple” seed AI is simple compared to an operating system (which people code up in their garages), not compared to laws of physics.
As long as we continue to accept Occam’s razor, there’s no reason to postulate fundamental gods.
Given that a god exists by other means (an alien singularity, say), I would expect it to appear approximately moral, because it would have created me (or modified me) with approximately its own morality. I assume that god would understand the importance of friendly intelligence. So yeah, the apparent neutrality is evidence against the existence of anything like a god.
Eliezer’s “simple” seed AI is simple compared to an operating system (which people code up in their garages), not compared to laws of physics.
Fair point, but I think you need lots of code only if you want the AI to run fast, and K-complexity doesn’t care about speed. A slow naive implementation of “perfect AI” should be about the size of the math required to define a “perfect AI”. I’d be surprised if it were bigger than the laws of physics.
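(A toy analogue of the size-versus-speed point, with code length standing in loosely for K-complexity: the shortest specification of a function is often the hopelessly slow one, and the fast version is the one that needs more code.)

    def fib_short(n):
        # Shortest description: exponentially slow.
        return n if n < 2 else fib_short(n - 1) + fib_short(n - 2)

    def fib_fast(n):
        # More machinery, but linear time.
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    assert fib_short(25) == fib_fast(25)  # same function, very different runtimes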
You’re right; AIXI or whatever is probably around the same complexity as physics. I bet physics is a lot simpler than it appears right now tho.
Now I’m unsure that a fundamental intelligence even means anything. AIXI, for example, is IIRC based on Bayes and Occam induction, whose domain is cognitive engines within universes more or less like ours. What would a physics god optimising some morality even be able to see and do? It sure wouldn’t be constrained by Bayes and such. Why not just replace it with a universe that is whatever morality, maximised? max(morality) is simpler than god(morality) almost no matter how simple god is. Assuming a physics god is even a coherent concept.
In our case, assuming a fundamental god is coherent, the “god did it” hypothesis is strictly defeated (same predictions, less theory) by the “god did physics” hypothesis, which is strictly defeated by the “physics” hypothesis (because physics is a simpler “morality” than anything else that would produce our world, and if we use physics, god doesn’t have to exist).
That leaves us with only alien singularity gods, which are totally possible, but don’t exist here by the reasoning I gave in the parent comment. What did I miss?
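(A toy MDL-style rendering of the "god did it" / "god did physics" / "physics" ordering above; the description strings are arbitrary stand-ins chosen only to make the claimed ordering concrete under a 2^-length prior.)

    # With identical predictions, the hypothesis with the shorter description gets
    # the larger prior; bolting a god onto the same physics only adds length.
    descriptions = {
        "physics":         "physics-rules",
        "god did physics": "god-who-decreed " + "physics-rules",
        "god did it":      "god-who-wills-each-event-to-happen-to-match " + "physics-rules",
    }
    for name, d in sorted(descriptions.items(), key=lambda kv: len(kv[1])):
        print(f"{name:>15}  length={len(d):3d}  prior ~ 2**-{len(d)}")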
I bet physics is a lot simpler than it appears right now tho.
That’s a reasonable bet. Another reasonable bet is that “laws of physics are about as complex as minds, but small details have too little measure to matter”.
Why not just replace it with a universe that is whatever morality, maximised? max(morality) is simpler than god(morality) almost no matter how simple god is.
Well, yeah. Then I guess the question is whether our universe is a byproduct of computing max(morality) for some simple enough “morality” that’s still recognizable as such. Will_Newsome seems to think so, or at least that’s the most sense I could extract from his comments...
Friendly intelligence is not particularly important when the intelligence in question is significantly less powerful an optimizer than its creator. I’m not really sure what would motivate a superintelligence to create entities like me, but given the assumption that one did so, it doesn’t seem more likely that it created me with (approximately) its own morality than that it created me with some different morality.
I take it you don’t think we have a chance of creating a superpowerful AI with our own morality? We don’t have to be very intelligent to be a threat if we can create something that is.
I don’t think we have a chance of doing so if we have a superintelligent creator who has taken steps to prevent us from doing so, no. (I also don’t think it likely that we have such a creator.)
Bayesians don’t believe in evidence silly goose, you know that. Anyway, User:cousin_it, you’re essentially right, though I think that LW would benefit less from developing updated arguments and more from reading Aquinas, at least in the counterfactual universe where LW knew how to read. Anyway. In the real world Less Wrong is hopeless. You’re not hopeless. As a decision theorist you’re trying to find God, so you have to believe in him in a sense, right? And if you’re not trying to find God you should probably stay the hell away from FAI projects. Just sayin’.