If not, and you’re just restricting the arena of discourse to utility-based-on-concern rather than utility-in-general, then OK… within that restricted context, I agree.
yes...we agree
If it’s true that negative affect and negative utility are roughly synonymous, it’s impossible to make a being that negatively values torture and doesn’t feel bad when seeing torture.
Shit, I’m in a contradiction. Okay, I’ve messed up by using “affect” under multiple definitions; my mistake.
Reformatting...
1) There are many mechanisms for creating beings that can be modeled as agents with utility.
2) Let us define Affect as the mechanism that implements utility in humans, aka emotion.
So now....
3) Do moral considerations apply only to affect, or to all things that approximate utility?
If we meet aliens, what do we judge them by?
They aren’t going to be made out of neurons. Our definitions of “emotion” are probably not going to apply. But they might be like us—they might cooperate among themselves and they might cooperate with us. We might feel empathy for them. A moral system which disregards the preferences of beings simply because affect is not involved in implementing their minds seems to not match my moral system. I’d want to be able to treat aliens well.
I have a dream that all beings that can be approximated as agents will be judged by their actions, and not any trivial specifics of how their algorithm is implemented.
I’d feel some empathy for an FAI too. Even if it doesn’t have emotions, it understands them. Its utility function puts it in the class of beings I’d call “good”. My social instincts seem to apply to it; I’m friendly to it the same way it is friendly to me.
So, what I’m saying is that “affect” and “utility” are morally equivalent. Even though there are multiple paths to utility, they all carry similar moral weight.
If you remove “concern” and replace it with a signal that has the same result on actions as concern, then maybe “concern” and the signal are morally equivalent.
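(A toy sketch of that claim, with made-up agent classes and payoffs rather than anything from this thread: two agents whose internals differ, one consulting an emotion-like concern signal and the other a bare numeric penalty table, end up making exactly the same choices.)

```python
# Toy sketch only: hypothetical classes and payoffs, invented for illustration.
# Two agents with different internal mechanisms (an emotion-like "concern"
# signal vs. a bare numeric penalty table) that make identical choices.

class ConcernDrivenAgent:
    """Chooses actions by consulting an emotion-like concern signal."""

    def concern(self, outcome):
        # Feels strongly bad about outcomes involving suffering.
        return -10.0 if outcome == "suffering" else 0.0

    def choose(self, options):
        # options maps each action to the outcome it produces.
        return max(options, key=lambda action: self.concern(options[action]))


class SignalDrivenAgent:
    """Chooses actions by consulting a plain penalty table, no feeling involved."""

    PENALTY = {"suffering": -10.0}

    def choose(self, options):
        return max(options, key=lambda action: self.PENALTY.get(options[action], 0.0))


options = {"intervene": "no suffering", "ignore": "suffering"}
print(ConcernDrivenAgent().choose(options))  # intervene
print(SignalDrivenAgent().choose(options))   # intervene
# Judged by their actions alone, the two mechanisms are indistinguishable.
```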
I agree that distinct processes that result in roughly equivalent utility shifts are roughly morally equivalent.
Do you further agree that it follows from this that there is some hard limit on the extent to which it makes sense to self-modify to avoid certain negative emotions?
(We can replace the negative emotions with other processes that have the same behavioral effect, but making someone undergo those other processes would be morally equivalent to making them undergo a negative emotion, so there isn’t a point in doing so.)
I don’t agree that it follows, no, though I do agree that there’s probably some threshold above which losing the ability to experience the emotions we currently experience leaves us worse off.
I also don’t agree that eliminating an emotion while adding a new process that preserves certain effects of that emotion which I value is equivalent (morally or otherwise) to preserving the emotion. More generally, I don’t agree with your whole enterprise of equating emotions with utility shifts. They are different things.