Humans did invent hypotheses like, “complete brain damage allows the mind to escape to a better place”, but there seems to be a strong case for the claim that humans are far more confident in such hypotheses than they should be, given the evidence.
It can also be argued that even humans who claim to believe in immortal souls don’t actually use this belief instrumentally: religious people don’t drop anvils on their heads to “allow the mind to escape to a better place”, unless they are insane. Even religious suicide terrorists generally have political or personal motives (e.g. increasing the status of their family members); they don’t really blow themselves up or fly planes into buildings for the 72 virgins.
You are mentioning some aspects that keep people from suicide or motivate it. This is the whole point. Suicide is a thinkable option. It just doesn’t happen very often because, no wonder, it is heavily selected against. There are lots of physical, psychological and social feedbacks in place that ensure it happens seldom. But that is no different from providing comparable training to AIXI.
And it appears that despite all these checks it is still possible to navigate people out of them (which is not much different from AIXI deriving solutions that evade its checks) and into committing suicide. For example, I remember a news story (disclaimer!) where a cultist fraudster convinced unhappy people to gift their wealth to some other person and commit suicide, with the cult-embellished promise that they would awake in the body of that person in another place. Now that wouldn’t convince me, but could it convince AIXI? (“questions ending with a ‘?’ mean no”)
You are mentioning some aspects that keep people from suicide or motivate it. This is the whole point. Suicide is a thinkable option.
Yes, but people generally know what it entails. We don’t want an AI agent to be completely incapable of destroying itself; we don’t want it to destroy itself without a good cause.
Crashing its spaceship into an incoming asteroid to deflect it away from Earth would be a good cause, for instance.
a cultist fraudster convinced unhappy people to gift their wealth to some other person and commit suicide, with the cult-embellished promise that they would awake in the body of that person in another place. Now that wouldn’t convince me, but could it convince AIXI?
If AIXI had a sufficient amount of experience of the world, I think it couldn’t.
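The claim above can be made concrete with a toy sketch (my own illustration, not AIXI’s actual expectimax; the hypothesis names and reward numbers are made up): a Bayesian expected-reward agent only chooses self-destruction if its posterior assigns substantial weight to an “afterlife pays off” hypothesis, so the fraudster’s job amounts to shifting that posterior, which enough experience of the world makes hard.

```python
# Toy illustration (not AIXI itself): a Bayesian expected-reward agent
# choosing between continuing to operate and self-destructing, under
# two rival hypotheses about what self-destruction yields.
# Hypothesis names and return values are invented for illustration.

RETURNS = {
    "mortalist": {"operate": 10.0, "self_destruct": 0.0},    # death ends reward
    "afterlife": {"operate": 10.0, "self_destruct": 100.0},  # death pays off
}

def expected_return(action, posterior):
    # Average the return of `action` over the agent's posterior on hypotheses.
    return sum(p * RETURNS[h][action] for h, p in posterior.items())

def best_action(posterior):
    # Pick the action with the highest posterior-expected return.
    return max(("operate", "self_destruct"),
               key=lambda a: expected_return(a, posterior))

# With enough experience, evidence concentrates on the "mortalist" hypothesis:
print(best_action({"mortalist": 0.99, "afterlife": 0.01}))  # operate

# A persuasive fraudster would have to shift the posterior substantially:
print(best_action({"mortalist": 0.10, "afterlife": 0.90}))  # self_destruct
```

On this sketch, the cult story only works on an agent whose experience hasn’t yet pinned down what destruction of its hardware actually entails.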
religious people don’t drop anvils on their heads to “allow the mind to escape to a better place”
In most religions with a concept of afterlife and heaven there is a very explicit prohibition on suicide. Dropping an anvil on your head is promised to lead to your mind being locked in a “worse place”.
Religious people also tend to wear helmets in places where heavy objects can accidentally fall on their heads, they go to the hospital when they are sick, and they are generally willing to invest a large amount of money and effort in staying alive. Unless you define suicide to include failing to do everything in your power (within moral and legal constraints) to postpone your death as long as possible, the willingness of religious people to stay alive can’t be explained just as compliance with the ban on suicide.
On the other hand, the religious ban on suicide can be easily explained as a way to reconcile the explicitly stated belief that death “allows the mind to escape to a better place”, with the implicit but effective belief that death actually sucks.