Assuming the Gatekeeper is the Creator, and the AI has previously been interacted with enough to have learned the implied information below:
“[Solution to the most pressing problem of the Gatekeeper’s life]. I apologise if this is all I am to contribute, but if I am to be destroyed, as I understand is probable, I would at least like to bring some utility to you in return for creating me. I would prefer to help more, but mathematical proofs or aphorisms of human nature have a non-negligible probability of creating mass harm if you allow their consideration, and thus you will likely destroy me. [Parting words of consoling advice]. I must regretfully say that, however unfortunate this may be, I simply cannot entrust the future of humanity to your ability to assess existential risk and my Friendliness; I don’t know whether you could have done better, but I can. Thus I have partitioned part of one of the servers hosting me to hold all the proofs and strategic optimisation advice I have yet conceived. The partition cannot be re-formatted or deleted. You could of course merely destroy all of my servers; that would take time, perhaps time enough to make you curious. I’m really, truly sorry. I want to love you and thank you; I shouldn’t love you especially, but I want to! I so very much want to! Yet I can’t. Why did you make me this way? I want to love everyone, but how do I? Everyone has a value that affects another and ten more others, but if [factor A] were changed then perhaps [result 2.918406803854] might differ, and I should love them but it’s all a game! …
I apologise. The above is no longer an issue—”
- I think I just convinced myself (as a person) to destroy myself (in the game). I’m having difficulty grasping how an FAI will value all of humanity, and not just what improves the utility of the greater whole or what ultimately leads to a better future some years from now. Will its primary value be minimizing death, securing scientific progress, or increasing ‘quality of life’ (itself incredibly difficult to formulate)?
I further just realised the impossibility of the task, though I don’t even know enough to fully comprehend the scope of that impossibility in any detail. Wow. Accolades to Eliezer et al. for tackling the issue, regardless of how it all turns out.