it can’t, by definition, invent a new tool entirely.
Can humans “invent a new tool entirely”, when all we have to work with is a handful of pre-defined quarks, leptons and bosons? AIXI is hard-coded to use just one tool, a Turing machine; yet the open-endedness of that tool makes it infinitely inventive.
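For concreteness, here is (roughly) Hutter's definition of AIXI; the whole agent is a single expectimax over all programs q for a universal Turing machine U, weighted by program length:

$$a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} \big[\, r_t + \cdots + r_m \,\big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

Every “tool” the agent ever uses is just some program q that happens to carry high weight; nothing new is ever added to the formalism itself.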
We can easily put a machine shop, or any other manufacturing capability, into the abstract room. We could even ignore the tedious business of manufacturing and just include a Star-Trek-style replicator, which lets the AI use anything for which it can provide blueprints.
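A minimal sketch of what such a replicator interface might look like (all the names here, Blueprint, KNOWN_MATERIALS and so on, are invented for illustration, not from any real system): the room instantiates anything the AI can fully specify within the simulation's modelled physics, and refuses anything it can't.

```python
from dataclasses import dataclass, field

# Toy stand-in for "materials the simulation knows how to model".
KNOWN_MATERIALS = {"steel", "glass", "silicon"}

@dataclass
class Blueprint:
    name: str
    materials: set = field(default_factory=set)
    geometry: str = ""  # e.g. a mesh file or CSG description

def replicate(blueprint):
    """Instantiate a simulated object, or refuse if the blueprint steps
    outside the modelled physics (unknown materials, missing geometry)."""
    if not blueprint.geometry:
        raise ValueError(f"{blueprint.name}: no geometry to simulate")
    unknown = blueprint.materials - KNOWN_MATERIALS
    if unknown:
        raise ValueError(f"{blueprint.name}: unmodelled materials {unknown}")
    return {"name": blueprint.name, "state": "instantiated in room"}

print(replicate(Blueprint("lathe", {"steel"}, "lathe.stl")))
```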
Also, we can easily be surprised by actions taken in the room. For example, we might simulate the room according to known scientific laws, and have it automatically suspend if anything strays too far into uncertain territory. We can then either abort the simulation, if something dangerous or undesirable is happening within, or else perform an experiment to see what would happen in that situation, then feed the result back in and resume. That would be a good way to implement an artificial scientist. Similar ideas are explored in http://lambda-the-ultimate.org/node/4392
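Here is a minimal, self-contained sketch of that suspend-and-resume loop; the "physics", the danger predicate, and all the numbers are invented placeholders, and only the control flow is the point:

```python
import random

def simulate_step(state, model):
    """Advance the room one tick under the current model (toy dynamics)."""
    return state + model["drift"]

def out_of_validated_range(state, model):
    """True when the state strays beyond the regime the model was tested in."""
    return abs(state - model["centre"]) > model["radius"]

def looks_dangerous(state):
    return state > 100  # toy safety predicate

def run_real_experiment(state):
    """Stand-in for performing the physical experiment and measuring."""
    return state + random.gauss(0, 0.1)

def run_room(state, model, ticks=1000):
    for _ in range(ticks):
        proposed = simulate_step(state, model)
        if out_of_validated_range(proposed, model):
            if looks_dangerous(proposed):
                return None  # abort: something undesirable is happening
            # Suspend, resolve the uncertainty empirically, extend the
            # validated range, and resume from the measured state.
            measured = run_real_experiment(proposed)
            model["radius"] = abs(measured - model["centre"]) + 1.0
            state = measured
        else:
            state = proposed
    return state

# With these toy numbers the state drifts upward, triggering experiments
# along the way, and eventually hits the danger predicate and aborts (None).
print(run_room(0.0, {"drift": 0.5, "centre": 0.0, "radius": 3.0}))
```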
Your response ignores the constraints already established in this line of conversation. I’m happy to reply to you, but what you’ve written doesn’t engage with the conversation that has already taken place.
Let’s suppose the constraint that the AI cannot update its world model applies. How can it then use a tool it has just invented? It can’t update its world model to include that tool.
Supposing it -can- update its world model, but only with respect to new tools it has developed: how do you prevent it from inventing a tool such as psychological manipulation of the experimenters running the simulation?