A prior saying that this is the only universe that exists isn’t very useful, since the AI will then treat everything as being part of the sandbox universe. It may very well break out, but it will think it’s only exploiting weird hidden properties of the Game-of-Life-verse. (Like the way we may exploit quantum mechanics without thinking that we’re breaking out of our universe.)
I have no idea how to encode a prior saying “the universe I observe is all that exists”, which is what you seem to assume. My proposed prior, which we do know how to encode, says “this mathematical structure is all that exists”, with an a priori zero chance of any weird properties.
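To make “we do know how to encode” concrete, here is a minimal sketch of my own (not a worked-out construction): take the mathematical structure to be Conway’s Game of Life, written as a step function on sets of live cells, and let the prior put all of its mass on that single hypothesis.

```python
# Minimal sketch: the "mathematical structure" is Conway's Game of Life,
# and the prior assigns probability 1 to exactly that structure.
from collections import Counter

def game_of_life_step(live_cells):
    """One step of Conway's Game of Life on a set of live (x, y) cells."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, count in neighbour_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

def prior(hypothesis):
    """Probability 1 on "the universe is this Game of Life program", 0 elsewhere."""
    return 1.0 if hypothesis is game_of_life_step else 0.0
```

Nothing in this prior refers to observations at all, which is the point: there is no hypothesis under which the AI’s universe has properties beyond the ones the step function gives it.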
If the AI is only used to solve certain formally specified questions without any knowledge of an external world, then that sounds much more like a theorem-prover than a strong AI. How could this proposed AI be useful for any of the tasks we’d like an AGI to solve?
An AI living in a simulated universe can be just as intelligent as one living in the real world. You can’t directly ask it to feed African kids, but you have many other options; see the discussion at Asking Precise Questions.
It can be a very good theorem prover, sure. But without access to information about the world, it can’t answer questions like “what is the CEV of humanity like” or “what’s the best way I can make a lot of money” or “translate this book from English to Finnish so that a native speaker will consider it a good translation”. It’s narrow AI, even though it could be broad AI if it were given more information.
The questions you wanted to ask in that thread were for a poly-time algorithm for SAT and for short proofs of math theorems. For those, why do you need to instantiate an AI in a simulated universe (which allows it to potentially create what we’d consider negative utility within the simulated universe), instead of just running a (relatively simple, sure to lack consciousness) theorem prover?
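For concreteness, here is roughly what I mean by a “relatively simple theorem prover”: a brute-force search for short proofs. In the sketch below, check_proof is a made-up stand-in for whatever trusted proof checker we pick; only the (exponential-time) search loop is shown.

```python
# Minimal sketch of a brute-force search for short proofs. `check_proof`
# is a hypothetical stand-in for a trusted verifier for some fixed formal
# system; only the search loop is shown.
from itertools import product

def find_short_proof(theorem, alphabet, max_length, check_proof):
    """Return the first candidate proof of at most max_length symbols
    that the verifier accepts, or None if there is none."""
    for length in range(1, max_length + 1):
        for candidate in product(alphabet, repeat=length):
            if check_proof(theorem, candidate):
                return candidate
    return None
```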
Is it because you think that being “embodied” helps with the ability to do math? Why? And does the reason carry through even if the AI has a prior that assigns probability 1 to a particular universe? (It seems plausible that having experience dealing with empirical uncertainty might be helpful for handling mathematical uncertainty, but that doesn’t apply if you have no empirical uncertainty...)
An AI in a simulated universe can self-improve, which would make it more powerful than the theorem provers of today. I’m not convinced that AI-ish behavior, like self-improvement, requires empirical uncertainty about the universe.
But self-improvement doesn’t require interacting with an outside environment (unless “improvement” means acquiring more computational resources, and the fact that the outside is simulated nullifies that anyway). For example, a theorem prover designed to self-improve can do so by writing a provably better theorem prover and then transferring control to (i.e., calling) it. Why bother with a simulated universe?
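A minimal sketch of what I mean, with all helper names made up for illustration: the prover searches for a successor program plus a proof that the successor is at least as strong by whatever formal criterion we fixed, and if the proof checks out it simply calls the successor. No simulated universe is involved anywhere.

```python
# Minimal sketch of self-improvement by hand-off. All parameters are
# hypothetical stand-ins: `search_for_successor` looks for (source, proof)
# pairs, `verify_improvement` checks that the proof really establishes
# "the successor is at least as strong as me", `run_program` executes a
# prover's source on a problem, and `prove_directly` is this prover's
# own ordinary proof search.

def solve(problem, my_source, search_for_successor, verify_improvement,
          run_program, prove_directly):
    candidate = search_for_successor(my_source)
    if candidate is not None:
        successor_source, proof = candidate
        if verify_improvement(my_source, successor_source, proof):
            # Transfer control to (i.e., call) the provably better prover.
            return run_program(successor_source, problem)
    # No verified successor found: just do the proof search ourselves.
    return prove_directly(problem)
```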
A simulated universe gives precise meaning to “actions” and “utility functions”, as I explained some time ago. It seems more elegant to give the agent a quined description of itself within the simulated universe, and a utility function over states of that same universe, instead of allowing only actions like “output a provably better version of myself and then call it”.
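As a rough sketch of that set-up (the names and interfaces below are my own, purely illustrative): the universe is a pure step function on states, the agent’s initial state contains its own quined source, and its utility is a function of universe states only, so “actions” are just the outputs it writes back into the state.

```python
# Minimal sketch of an agent embedded in a simulated universe. Everything
# here is illustrative: the universe is a pure step function, the agent is
# handed a quined description of its own source, and utility is defined
# over universe states only -- no "outputs to the real world" appear.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class EmbeddedAgentProblem:
    step: Callable[[Any], Any]               # dynamics of the simulated universe
    initial_state: Any                       # includes the agent's own encoding
    my_own_source: str                       # quined description of the agent
    write_output: Callable[[Any, Any], Any]  # how the agent's output enters the state
    utility: Callable[[Any], float]          # defined over universe states only

def evaluate_policy(problem: EmbeddedAgentProblem, policy, horizon: int) -> float:
    """Run the simulated universe forward under `policy` and score the final state."""
    state = problem.initial_state
    for _ in range(horizon):
        output = policy(state, problem.my_own_source)
        state = problem.step(problem.write_output(state, output))
    return problem.utility(state)
```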
From the FAI wikipedia page:

One example Yudkowsky provides is that of an AI initially designed to solve the Riemann hypothesis, which, upon being upgraded or upgrading itself with superhuman intelligence, tries to develop molecular nanotechnology because it wants to convert all matter in the Solar System into computing material to solve the problem, killing the humans who asked the question.
Cousin_it’s approach may be enough to avoid that.