Do you actually disagree with anything, or are you just trying to ridicule it? Do you think that the possibility that FAI research might increase negative utility is not to be taken seriously? Do you think that world states where faulty FAI designs are implemented have, on average, higher utility than world states where nobody is alive? If so, what research could I possibly do to come to the same conclusion? What arguments am I missing? Do I just have to think about it longer?
Consider the way Eliezer Yudkowsky argues in favor of FAI research:
Two hundred million years from now, the children’s children’s children of humanity in their galaxy-civilizations, are unlikely to look back and say, “You know, in retrospect, it really would have been worth not colonizing the Hercules supercluster if only we could have saved 80% of species instead of 20%”. I don’t think they’ll spend much time fretting about it at all, really. It is really incredibly hard to make the consequentialist utilitarian case here, as opposed to the warm-fuzzies case.
or
This is crunch time. This is crunch time for the entire human species. … and it’s crunch time not just for us, it’s crunch time for the intergalactic civilization whose existence depends on us. I think that if you’re actually just going to sort of confront it, rationally, full-on, then you can’t really justify trading off any part of that intergalactic civilization for any intrinsic thing that you could get nowadays …
Is his style of argumentation any different from mine except that he promises lots of positive utility?
I was just amused by how anticlimactic the quoted sentence is (or maybe by how it would be anticlimactic anywhere else but here): the way it explains why a living hell for the rest of time is a bad thing by associating it with something as abstract as a dramatic increase in negative utility. That’s all I meant by that.
“Ladies and gentlemen, I believe this machine could create a living hell for the rest of time...”
(audience yawns, people look at their watches)
″...increasing negative utility dramatically!”
(shocked gasps, audience riots)