Not quite, but something similar. I acknowledge that my views might be biased, so I assign some weight to the views of other people, especially if they are well-informed, rational, intelligent, and trying to answer the same “ethical” questions I’m interested in.
So it’s not that I have other people’s values as a terminal value among others, but rather that my terminal value is some vague sense of doing something meaningful/altruistic where the exact goal isn’t yet fixed. I have changed my views many times in the past after considering thought experiments and arguments about ethics and I want to keep changing my views in future circumstances that are sufficiently similar.
Let me echo that back to you to see if I get it.
We posit some set S1 of meaningful/altruistic acts.
You want to perform acts in S1.
Currently, the metric you use to determine whether an act is meaningful/altruistic is whether it reduces suffering or not. So there is some set (S2) of acts that reduce suffering, and your current belief is that S1 = S2.
For example, wireheading and genocide reduce suffering (i.e., are in S2), so it follows that wireheading and genocide are meaningful/altruistic acts (i.e., are in S1), so it follows that you want wireheading and genocide.
And when you say you take moral disagreement seriously, you mean that you take seriously the possibility that in thinking further about ethical questions and discussing them with well-informed, rational, intelligent people, you might have some kind of insight that brings you to understand that in fact S1 != S2, at which point you would no longer want wireheading and genocide.
Did I get that right?
Yes, that sounds like it. Of course I have to specify what exactly I mean by “altruistic/meaningful”, and as soon as I do, the question of whether S1 = S2 might become trivial, i.e., a deductive one-line proof. So I’m not completely sure whether the procedure I use makes sense, but it seems to be the only way to make sense of my past selves changing their ethical views. The alternative would be to look at each instance of changing my views as a failure of goal preservation, but that’s not how I want to see it and not how it felt.
OK. Thanks.
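The set-membership argument in the exchange above can be sketched as a toy model. This is only an illustration of the inference structure, not anyone's actual decision procedure; the act names and the function are hypothetical placeholders.

```python
# Toy model of the S1 = S2 argument.
# S2: the set of acts that reduce suffering.
# S1: the set of meaningful/altruistic acts (the agent's terminal goal).
# While the agent believes S1 = S2, membership in S2 implies endorsement.

def endorsed_acts(s2, believes_s1_equals_s2):
    """Acts the agent wants to perform, given the belief about S1 and S2."""
    if believes_s1_equals_s2:
        # S1 = S2: everything that reduces suffering counts as meaningful.
        return set(s2)
    # After the insight that S1 != S2, the inference no longer goes through;
    # S2-membership alone no longer licenses endorsement.
    return set()

# Illustrative placeholder acts, per the example in the exchange.
s2 = {"wireheading", "genocide", "cure_disease"}

# Before the moral insight: all of S2 is endorsed, including wireheading.
assert endorsed_acts(s2, believes_s1_equals_s2=True) == s2

# After the insight S1 != S2: endorsement of those acts is withdrawn.
assert "wireheading" not in endorsed_acts(s2, believes_s1_equals_s2=False)
```

The point the sketch makes explicit is that "wanting wireheading" was never a terminal value here; it was derived from the (revisable) belief that S1 = S2, so revising the belief revises the want without any failure of goal preservation.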