Then you give the AI more options than just surgery. It has an entire simulated room of tools to work with.

And you have an amazingly good tool-user, that still doesn’t innovate or surprise you.
That’s not entirely true. It might surprise us by, say, showing us the precise way to use an endoscopic cauterizer to cut off blood flow to a tumor without any collateral damage. But it can’t, by definition, invent a new tool entirely.
I’m not sure the solution to the AI friendliness problem is “Creating AI that is too narrow-minded to be dangerous”. You throw out most of what is intended to be achieved by AI in the first place, and achieve little more than evolutionary algorithms are already capable of. (If you’re capable of modeling the problem to that extent, you can just toss it, along with the toolset, into an evolutionary algorithm and get something pretty close to just as good.)
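For concreteness, a minimal sketch of the "just toss it into an evolutionary algorithm" comparison might look like the following, assuming the problem really has been reduced to a fitness score over sequences of pre-defined tool applications. The TOOLS list, the evaluate function, and all the parameters are hypothetical placeholders, not anything specified in this thread:

```python
import random

# Hypothetical pre-defined toolset and problem model; both are assumed to
# exist already if the problem can be formalized to this extent.
TOOLS = ["image", "excise", "cauterize", "irrigate", "suture"]

def evaluate(plan):
    """Placeholder fitness: reward plans that image before excising.
    A real model would score the plan against the formal problem description."""
    return sum(1 for a, b in zip(plan, plan[1:]) if a == "image" and b == "excise")

def evolve(pop_size=50, plan_len=8, generations=200, mutation_rate=0.1):
    population = [[random.choice(TOOLS) for _ in range(plan_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=evaluate, reverse=True)
        survivors = population[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, plan_len)
            child = a[:cut] + b[cut:]                              # crossover
            child = [random.choice(TOOLS) if random.random() < mutation_rate
                     else step for step in child]                  # mutation
            children.append(child)
        population = survivors + children
    return max(population, key=evaluate)

print(evolve())
```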
But it can’t, by definition, invent a new tool entirely.
Why not? The AI can do anything that is allowed by the laws of physics, and maybe a bit more if we let it. It could invent a molecule that acts as a drug to kill the cancer. It could use the tools in the room to build different tools. It could give us plans for tiny nanobots which enter the bloodstream and target cancer cells. Etc.
Just because an environment is well defined does not mean you can’t invent anything new.
The AI can do anything that is allowed by the laws of physics
No, it can do anything that its worldview includes, and any operations defined internally. You’re no longer talking about a Gödel machine, and you’ve lost all your safety constraints.
You can give, in theory of course, a formal description of the laws of physics. Then you can ask it to produce a plan or machine which fulfills any constraints you specify. You don’t need to worry about it escaping from the box. Now its solution might be terrible without tons of constraints, but it’s at least not optimized to escape from the box or to trick you.
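As a very rough sketch of that setup, with physics_step, satisfies_constraints, and the candidate plans all standing in as hypothetical placeholders for a real formal model: every candidate is evaluated purely inside the model, against constraints the operators wrote down, so "trick the experimenters" is not even a term in the objective.

```python
def physics_step(state, action):
    """Hypothetical stand-in for a formal physics model: a pure function from
    (state, action) to the next state. No I/O, no experimenters, no outside world."""
    new = dict(state)
    new["energy"] -= 1
    if action == "ablate":
        new["tumor"] = 0
    return new

def satisfies_constraints(state):
    """Operator-written constraints, stated over model variables only."""
    return state["energy"] >= 0 and state["tumor"] == 0

def simulate(plan, state):
    for action in plan:
        state = physics_step(state, action)
    return state

def search(candidate_plans, initial_state):
    # Every candidate is scored entirely inside the formal model; "escape the
    # box" or "trick the experimenters" is not even expressible here.
    for plan in candidate_plans:
        if satisfies_constraints(simulate(plan, dict(initial_state))):
            return plan
    return None

print(search([["scan", "scan", "scan"], ["scan", "ablate"]],
             {"energy": 10, "tumor": 1}))
```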
it can’t, by definition, invent a new tool entirely.
Can humans “invent a new tool entirely”, when all we have to work with are a handful of pre-defined quarks, leptons and bosons? AIXI is hard-coded to just use one tool, a Turing Machine; yet the open-endedness of that tool makes it infinitely inventive.
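To make that concrete without invoking AIXI itself (which weights programs by length over a universal Turing machine and is incomputable), here is a toy sketch in which the single fixed "tool" is an interpreter for a tiny stack language, and open-ended behaviour comes from enumerating programs for it:

```python
from itertools import count, product

# A tiny stack language: the one fixed "tool" is this interpreter.
OPS = ["push1", "dup", "add", "mul"]

def run(program):
    stack = []
    for op in program:
        if op == "push1":
            stack.append(1)
        elif op == "dup" and stack:
            stack.append(stack[-1])
        elif op == "add" and len(stack) >= 2:
            stack.append(stack.pop() + stack.pop())
        elif op == "mul" and len(stack) >= 2:
            stack.append(stack.pop() * stack.pop())
    return stack[-1] if stack else None

def shortest_program_computing(target):
    """Enumerate programs in order of length until one outputs `target`:
    a crude, computable echo of Solomonoff-style search."""
    for length in count(1):
        for program in product(OPS, repeat=length):
            if run(program) == target:
                return program

print(shortest_program_computing(4))
```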
We can easily put a machine shop, or any other manufacturing capabilities, into the abstract room. We could ignore the tedious business of manufacturing and just include a Star-Trek-style replicator, which allows the AI to use anything for which it can provide blueprints.
Also, we can easily be surprised by actions taken in the room. For example, we might simulate the room according to known scientific laws, and have it automatically suspend if anything strays too far into uncertain territory. We can then either abort the simulation, if something dangerous or undesirable is happening within, or else perform an experiment to see what would happen in that situation, then feed the result back in and resume. That would be a good way to implement an artificial scientist. Similar ideas are explored in http://lambda-the-ultimate.org/node/4392
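A skeletal version of that suspend-and-consult loop might look like the following, with known_laws, uncertainty, and consult_experimenters as hypothetical placeholders for the pieces this comment only gestures at:

```python
UNCERTAINTY_THRESHOLD = 0.9  # assumed cutoff for "too far into uncertain territory"

def known_laws(state, action):
    """Placeholder: advance the simulated room one step using accepted physics."""
    return {**state, "t": state["t"] + 1, "last": action}

def uncertainty(state, action):
    """Placeholder: how far outside validated physics this step would take us (0..1)."""
    return 0.0 if action in {"cut", "cauterize"} else 1.0

def consult_experimenters(state, action):
    """Placeholder for the suspended phase: humans either abort, or run a real
    experiment and report the observed outcome to feed back into the model."""
    if action == "detonate":
        return None                      # abort: dangerous or undesirable
    return known_laws(state, action)     # pretend the lab measured this outcome

def simulate_room(agent_actions, state):
    for action in agent_actions:
        if uncertainty(state, action) > UNCERTAINTY_THRESHOLD:
            state = consult_experimenters(state, action)   # suspend here
            if state is None:
                return "aborted"
            continue                                       # resume with new data
        state = known_laws(state, action)
    return ("finished", state)

print(simulate_room(["cut", "vaporize", "cauterize"], {"t": 0}))
```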
Your response ignores the constraints this line of conversation has already engendered. I’m happy to reply to you, but your response doesn’t have anything to do with the conversation that has already taken place.
Let’s suppose the constraint on the AI being unable to update its world model applies. How can it use a tool it has just invented? It can’t update its world model to include that tool.
Supposing it -can- update its world model, but only in reference to new tools it has developed: How do you prevent it from inventing a tool like the psychological manipulation of the experimenters running the simulation?
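To see why that restriction is hard to enforce, consider a naive sketch of an update filter (the schema and both examples are hypothetical): a purely syntactic "is this a tool description?" check happily admits an entry whose working parts are the experimenters themselves.

```python
from dataclasses import dataclass

@dataclass
class ToolUpdate:
    name: str
    preconditions: list   # model states in which the tool can be applied
    effect: str           # what applying it does to the model state

def is_tool_update(update):
    """Naive filter: accept anything shaped like a tool description.
    Nothing here inspects what the 'tool' actually operates on."""
    return isinstance(update, ToolUpdate) and bool(update.name) and bool(update.effect)

scalpel_v2 = ToolUpdate(
    name="micro-scalpel",
    preconditions=["tumor exposed"],
    effect="sever target tissue only",
)

# The same schema accepts a "tool" whose mechanism is the experimenters.
persuasion = ToolUpdate(
    name="operator dialogue protocol",
    preconditions=["operator reading the transcript"],
    effect="operator relaxes a simulation constraint",
)

print(is_tool_update(scalpel_v2), is_tool_update(persuasion))  # True True
```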