Unreliable hardware is a problem that applies equally to all AIs. You could just as well say that any AI can become unfriendly due to coding errors. True, but...
An AI with a prior of zero for the existence of the outside world will never believe in it, no matter what evidence it sees.
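To spell out the Bayesian step: the posterior is proportional to the prior, so a prior of exactly zero is a fixed point of updating. Here is a minimal sketch of that arithmetic in Python (the hypothesis label and the likelihood numbers are illustrative, not from the original discussion):

```python
# Minimal sketch of Bayesian updating, illustrating why a prior of exactly
# zero can never be raised by evidence: the posterior P(H | E) is proportional
# to P(E | H) * P(H), which is zero whenever P(H) is zero.

def update(prior: float, likelihood_if_true: float, likelihood_if_false: float) -> float:
    """Return the posterior P(H | E) via Bayes' theorem."""
    evidence = likelihood_if_true * prior + likelihood_if_false * (1.0 - prior)
    if evidence == 0.0:
        return prior  # observation impossible under the model; no update
    return likelihood_if_true * prior / evidence

# Hypothesis H: "an outside world exists", with prior fixed at exactly 0.
belief = 0.0
for _ in range(1000):
    # Even evidence ~1000x more likely if H is true cannot move a zero prior.
    belief = update(belief, likelihood_if_true=0.999, likelihood_if_false=0.001)

print(belief)  # 0.0 -- no amount of evidence changes a prior of zero
```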
Would such a constraint be possible to formulate? An AI would presumably formulate theories about its visible universe that involve all kinds of variables that aren't directly observable, much like our physical theories. How could one prevent it from formulating theories that involve something resembling the outside world, even if the AI denies that those entities exist and treats them as mere mathematical conveniences? (Clearly, in the latter case it might still be drawn towards actions that in practice interact with the outside world.)
Sorry for editing my comment. The point you’re replying to wasn’t necessary to strike down Johnicholas’s argument, so I deleted it.
I don’t see why the AI would formulate theories about the “visible universe”. It could start in an empty universe (apart from the AI’s own machinery), and have a prior that knows the complete initial state of the universe with 100% certainty.
In this circumstance, a leaky abstraction between real physics and simulated physics combines with the premise “no other universes exist” in a mildly amusing way.