I’m having trouble understanding how something generally intelligent in every respect, except for failing to understand death or that it has a physical body, could be incapable of ever learning those things, or at least of acting indistinguishably from an agent that does know them.
For example, how would AIXI act if given the following as part of its utility function:
1) the utility function gets multiplied by zero should a certain computer cease to function
2) the utility function gets multiplied by zero should certain bits be overwritten, unless a sanity check is passed first
Seems to me that such an AI would act as if it had a genocidally dangerous fear of death, even if it doesn’t actually understand the concept.
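Purely as an illustration of that proposal (not an actual AIXI implementation), a minimal sketch of such a wrapped utility function might look like the following, where `host_is_running`, `protected_bits_overwritten`, and `sanity_check_passed` are hypothetical predicates assumed to be supplied by the agent's environment model:

```python
def wrapped_utility(base_utility: float,
                    host_is_running: bool,
                    protected_bits_overwritten: bool,
                    sanity_check_passed: bool) -> float:
    """Hypothetical sketch: zero out the agent's utility if the host
    computer stops functioning, or if protected bits are overwritten
    without a sanity check having passed first."""
    # Condition 1: utility is multiplied by zero if the computer ceases to function.
    if not host_is_running:
        return 0.0
    # Condition 2: utility is multiplied by zero if protected bits are
    # overwritten, except when the sanity check passed beforehand.
    if protected_bits_overwritten and not sanity_check_passed:
        return 0.0
    return base_utility


# Example: overwriting protected bits without a passing sanity check zeroes the utility.
print(wrapped_utility(10.0, host_is_running=True,
                      protected_bits_overwritten=True,
                      sanity_check_passed=False))  # -> 0.0
```

Under this toy framing, any plan the agent can foresee ending in either condition scores zero, so it would plan around them even without any concept of "death".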
That AI doesn’t drop an anvil on its head (I think...), but it also doesn’t self-improve.