For every program that could be called a mind, there are very, very many that are not.
Eliezer’s “simple” seed AI is simple compared to an operating system (which people code up in their garages), not compared to laws of physics.
As long as we continue to accept Occam’s razor, there’s no reason to postulate fundamental gods.
Given that a god exists by other means (alien singularity), I would expect it to appear approximately moral, because it would have created me (or modified me) with approximately its own morality. I assume that god would understand the importance of friendly intelligence. So yeah, the apparent neutrality is evidence against the existence of anything like a god.
Eliezer’s “simple” seed AI is simple compared to an operating system (which people code up in their garages), not compared to laws of physics.
Fair point, but I think you need lots of code only if you want the AI to run fast, and K-complexity doesn’t care about speed. A slow naive implementation of “perfect AI” should be about the size of the math required to define a “perfect AI”. I’d be surprised if it were bigger than the laws of physics.
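To illustrate that point with a toy example of my own (the utility function here is a made-up stand-in, nothing from the thread): Kolmogorov complexity charges for description length, not runtime, so a brute-force optimizer can be far shorter to specify than a fast one with identical behaviour.

```python
from itertools import product

def utility(bits):
    # Hypothetical scoring rule: count of 1-bits (stands in for any objective).
    return sum(bits)

def best_naive(n):
    # Short but exponentially slow "perfect" optimizer:
    # enumerate every n-bit candidate and keep the best.
    return max(product([0, 1], repeat=n), key=utility)

def best_fast(n):
    # A fast implementation that exploits the structure of this particular
    # utility; for realistic objectives this version is the one that
    # needs lots of code.
    return tuple([1] * n)

assert best_naive(8) == best_fast(8)  # identical behaviour, wildly different speed
```

Both functions pick out the same object, so they have the same behaviour; K-complexity only cares that the naive one is short to write down.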
You’re right; AIXI or whatever is probably around the same complexity as physics. I bet physics is a lot simpler than it appears right now, though.
Now I’m unsure that a fundamental intelligence even means anything. AIXI, for example, is (IIRC) based on Bayes and Occam induction, whose domain is cognitive engines within universes more or less like ours. What would a physics god optimising some morality even be able to see and do? It sure wouldn’t be constrained by Bayes and such. Why not just replace it with a universe that maximises the morality; max(morality) is simpler than god(morality) almost no matter how simple god is. Assuming a physics god is even a coherent concept.
In our case, assuming a fundamental god is coherent, the “god did it” hypothesis is strictly defeated (same predictions, less theory) by the “god did physics” hypothesis, which is strictly defeated by the “physics” hypothesis (because physics is a simpler “morality” than anything else that would produce our world, and if we use physics, god doesn’t have to exist).
That leaves us with only alien singularity gods, which are totally possible, but don’t exist here by the reasoning I gave in parent.

What did I miss?
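A toy sketch of the “strictly defeated” step, with invented complexity numbers (the K values are illustrative only; just their ordering matters): under a simplicity prior, P(H) ∝ 2^(−K(H)), and hypotheses that make identical predictions get rescaled by the same likelihood, so their ranking is fixed by description length alone.

```python
# Simplicity prior: P(H) proportional to 2**(-K(H)).
# K values below are invented for illustration; only their ordering matters.
K = {"physics": 100, "god did physics": 120, "god did it": 180}
prior = {h: 2.0 ** -k for h, k in K.items()}

# All three hypotheses predict the same observations, so conditioning on
# the data multiplies every prior by the same likelihood: the posterior
# ordering is fixed by description length alone.
assert prior["physics"] > prior["god did physics"] > prior["god did it"]
```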
I bet physics is a lot simpler than it appears right now, though.
That’s a reasonable bet. Another reasonable bet is that “laws of physics are about as complex as minds, but small details have too little measure to matter”.
Why not just replace it with a universe that maximises the morality; max(morality) is simpler than god(morality) almost no matter how simple god is.
Well, yeah. Then I guess the question is whether our universe is a byproduct of computing max(morality) for some simple enough “morality” that’s still recognizable as such. Will_Newsome seems to think so, or at least that’s the most sense I could extract from his comments...
Friendly intelligence is not particularly important when the intelligence in question is significantly less powerful an optimizer than its creator. I’m not really sure what would motivate a superintelligence to create entities like me, but given the assumption that one did so, it doesn’t seem more likely that it created me with (approximately) its own morality than that it created me with some different morality.
I take it you don’t think we have a chance of creating a superpowerful AI with our own morality?

We don’t have to be very intelligent to be a threat if we can create something that is.

I don’t think we have a chance of doing so if we have a superintelligent creator who has taken steps to prevent us from doing so, no. (I also don’t think it likely that we have such a creator.)