I want to add a little to my stance on utilitarianism. A utilitarian superintelligence would probably kill me and everyone I love, because we are made of atoms that could be used for minds that are more hedonic[1][2][3]. Given a choice between paperclips and utilitarianism, I would still choose utilitarianism. But, if there were a utilitarian TAI project along with a half-decent chance to do something better (by my lights), I would actively oppose the utilitarian project. From my perspective, such a project would essentially be an enemy combatant.
One way to avoid this is by modifying utilitarianism to place weight only on currently existing people. But this is already not that far from my cooperative bargaining proposal (although still inferior to it, IMO).
Another way to avoid it is by postulating some very strong penalty on death (i.e. discontinuity of personality). But this is not trivial to do, especially without creating other problems. Moreover, from my perspective this kind of thing is a hack trying to work around the core issue, namely that I am not a utilitarian (along with the vast majority of people).
A possible counterargument is that the superhedonic future minds would be sad to contemplate our murder. But this seems too weak to change the outcome, even assuming that this version of utilitarianism mandates minds who would want to know the truth and care about it, and that this preference is counted towards “utility”.
A utilitarian superintelligence would probably kill me and everyone I love, because we are made of atoms that could be used for minds that are more hedonic
This seems like a reasonable concern about some types of hedonic utilitarianism. To be clear, I’m not aware of any formulation of utilitarianism that doesn’t have serious issues, and I’m also not aware of any formulation of any morality that doesn’t have serious issues.
But, if there were a utilitarian TAI project along with a half-decent chance to do something better (by my lights), I would actively oppose the utilitarian project. From my perspective, such a project would essentially be an enemy combatant.
Just to be clear, this isn’t in response to something I wrote, right? (I’m definitely not advocating any kind of “utilitarian TAI project” and would be quite scared of such a project myself.)
Moreover, from my perspective this kind of thing is a hack trying to work around the core issue, namely that I am not a utilitarian (along with the vast majority of people).
So what are you (and them) then? What would your utopia look like?
Just to be clear, this isn’t in response to something I wrote, right? (I’m definitely not advocating any kind of “utilitarian TAI project” and would be quite scared of such a project myself.)
No! Sorry if I gave that impression.
So what are you (and them) then? What would your utopia look like?
Well, I linked my toy model of partiality before. Are you asking about something more concrete?
Yeah, I mean aside from how much you care about various other people, what concrete things do you want in your utopia?
I have low confidence about this, but my best guess at a personal utopia would be something like: a lot of cool and interesting things are happening. Some of them are good and some of them are bad (a world in which nothing bad ever happens would be boring). However, there is a limit on how bad anything is allowed to be (for example, true death, permanent crippling of someone’s mind, and eternal torture are over the line), and overall “happy endings” are more common than “unhappy endings”. Moreover, since it’s my utopia (according to my understanding of the question, we are ignoring the bargaining process and acausal cooperation here), I am near the top along those desirable dimensions which are zero-sum (e.g. I play an especially important / “protagonist” role in events to the extent that it’s impossible for everyone to play such an important role, and have high status to the extent that it’s impossible for everyone to have such high status).