Eliezer’s “simple” seed AI is simple compared to an operating system (the kind people code up in their garages), not compared to the laws of physics.
Fair point, but I think you only need lots of code if you want the AI to run fast, and K-complexity doesn’t care about speed. A slow, naive implementation of a “perfect AI” should be about the size of the math required to define “perfect AI”. I’d be surprised if it were bigger than the laws of physics.
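To illustrate the “K-complexity doesn’t care about speed” point with a toy analogy (sorting, not AI — the example and names are mine): the program closest to the bare mathematical definition of “sorted” is tiny but factorially slow, while fast sorting code is much longer.

```python
from itertools import permutations

def sorted_naive(xs):
    # Transcribe the definition directly: a sorted list is the
    # permutation of xs whose adjacent elements are in order.
    # Tiny description length, factorial runtime -- K-complexity
    # measures the former and ignores the latter.
    for p in permutations(xs):
        if all(a <= b for a, b in zip(p, p[1:])):
            return list(p)
```

A production sort (e.g. CPython’s timsort) takes thousands of lines to be fast, yet computes the same function; by analogy, a naive transcription of the math defining a “perfect AI” could be far shorter than any practical implementation.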
You’re right; AIXI or whatever is probably around the same complexity as physics. I bet physics is a lot simpler than it appears right now, though.
Now I’m unsure that “a fundamental intelligence” even means anything. AIXI, for example, is (IIRC) based on Bayes and Occam induction, whose domain is cognitive engines embedded in universes more or less like ours. What would a physics god optimising some morality even be able to see and do? It certainly wouldn’t be constrained by Bayes and the like. Why not just replace it with a universe that is whatever morality, maximised? max(morality) is simpler than god(morality) almost no matter how simple the god is. Assuming a physics god is even a coherent concept.
In our case, assuming a fundamental god is coherent, the “god did it” hypothesis is strictly defeated (same predictions, less theory) by the “god did physics” hypothesis, which is strictly defeated by the “physics” hypothesis. (Because physics is a simpler “morality” than anything else that would produce our world, and if we use physics, god doesn’t have to exist.)
That leaves us with only alien singularity gods, which are totally possible, but don’t exist here by the reasoning I gave in the parent.

What did I miss?
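The max(morality) vs. god(morality) claim above can be caricatured in code (a toy sketch only — the candidate “universes” and the morality function are arbitrary stand-ins I made up): any optimizer that ends up at the moral optimum contains an implicit argmax plus extra machinery for perceiving, evaluating, and acting, so the direct argmax is never the longer program.

```python
def morality(universe):
    # Toy stand-in: score a candidate universe (here, a bit tuple).
    return sum(universe)

def max_morality(universes):
    # "max(morality)": just output the highest-scoring universe.
    return max(universes, key=morality)

def god_morality(universes):
    # "god(morality)": simulate an agent that explores, evaluates,
    # and acts, eventually settling on the same answer. Everything
    # below beyond the bare comparison is extra machinery -- extra
    # description length that buys no new output.
    best = universes[0]
    for u in universes[1:]:                # the "god" explores...
        if morality(u) > morality(best):   # ...evaluates...
            best = u                       # ...and acts.
    return best
```

Both programs pick out the same universe; the second is simply a longer way to write the first, which is the sense in which max(morality) defeats god(morality) on simplicity.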
I bet physics is a lot simpler than it appears right now, though.
That’s a reasonable bet. Another reasonable bet is that “laws of physics are about as complex as minds, but small details have too little measure to matter”.
Why not just replace it with a universe that is whatever morality, maximised? max(morality) is simpler than god(morality) almost no matter how simple the god is.
Well, yeah. Then I guess the question is whether our universe is a byproduct of computing max(morality) for some simple enough “morality” that’s still recognizable as such. Will_Newsome seems to think so, or at least that’s the most sense I could extract from his comments...