We expect that post-singularity there will still be limited resources, in the form of the finite computation available before heat death.
Personally, I expect the FAI to give a baby universe to everyone who wants one, so the question is moot.
If not, I do not expect the FAI to care about past contributions, since its goal would be to maximize something like the integral of fun times population over time. The people with the highest fun/resource ratio would be rewarded, most likely those with the lowest IQ, as they would be happy to be injected with the fun drug and kept in suspended animation for as long as possible.
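To spell out what I mean by the fun/resource ratio, here is a toy formalization (my own notation, not anything from the Fun Theory posts): let $n_i$ be the number of person-moments the FAI allocates to person $i$, $f_i$ the fun per person-moment, $r_i$ the computational cost per person-moment, and $R$ the total computational budget. The FAI then solves

$$\max_{n_i \ge 0} \; \sum_i f_i \, n_i \quad \text{subject to} \quad \sum_i r_i \, n_i \le R,$$

and the optimum of this linear program spends essentially the whole budget on whoever has the largest ratio $f_i / r_i$, i.e. the cheapest minds to please, with no term anywhere that tracks past contributions.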
I would bet US$100 that, if asked, Eliezer would say that
"the people with the highest fun/resource ratio would be rewarded, most likely those with the lowest IQ, as they would be happy to be injected with the fun drug and kept in suspended animation for as long as possible"
shows a complete misinterpretation of Fun Theory.
I’m not dismissing the possibility of your scenario, just pointing out that SIAI is explicitly excluding that type of outcome from their definition of “Friendly”.
Only under the unlimited resources assumption, which is not the case here.
"Personally, I expect the FAI to give a baby universe to everyone who wants one, so the question is moot. If not, I do not expect the FAI to care about past contributions, since its goal would be to maximize something like the integral of fun times population over time. The people with the highest fun/resource ratio would be rewarded, most likely those with the lowest IQ, as they would be happy to be injected with the fun drug and kept in suspended animation for as long as possible."
That’s not what LW refers to as an FAI, but instead a failed FAI. See posts like this one and this one, and this wiki entry.
I mean it in this sense.
"Only under the unlimited resources assumption, which is not the case here."
I am explicitly calling that unFriendly given bounded resources.
I don’t know what I expect, but giving everyone who wants one a baby universe is certainly what I want it to do.