How is an AIXI to infer that it is in a box, when it cannot conceive of its own existence?
How is it supposed to talk its way out when it cannot talk?
For AI to be dangerous in the way MIRI supposes, it seems to need the characteristics of more than one kind of machine...the eloquence of a Strong AI Turing Test passer combined with an AIXI's relentless pursuit of an arbitrary goal.
These different models need to be shown to be compatible...calling them both AI is not enough.
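For reference, a rough sketch of AIXI's action rule, as I recall Hutter's formulation: the agent picks actions by maximizing expected total reward over all environment programs q, weighted by their simplicity, with the agent itself appearing only as the outer argmax and never inside any q.

$$a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} \left[ r_t + \cdots + r_m \right] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

Here U is a universal Turing machine and ℓ(q) is the length of program q. Nothing in the environment models q represents the agent's own computation, which is the sense in which AIXI has no handle on its own existence or on being "in a box".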