A given person’s preference is one thing, but their mind is another. If the choice were between a personal preference on one hand and an aggregate of many people’s preferences on the other, it would be simple. But the people included in a preference-extraction procedure are not the same thing as their preferences: what the procedure starts from is a collection of people, not a collection of preferences.
It’s not obvious to me that my personal preference is best described by my own brain rather than by an extrapolation from as many people’s brains as possible. Maybe I want to calculate correctly, but I’m personally a flawed calculator, as are all the others, each flawed in its own way. By examining as many calculators as possible, I could glimpse a better picture of how correct calculation is done than I could ever get by examining only myself.
I value what is good not because humans value what is good, and I value what I in particular value (as opposed to what other people value) not because it is I who values it. If looking at other people’s minds helps me figure out what should be valued, then I should do that.
That’s one argument for extrapolating collective volition. It’s a simple argument, though, and I expect that whatever can be extracted from my mind alone should be enough to reliably reconstruct arguments like this one, and thus to decide to investigate other people’s minds if that’s what it takes to improve the understanding of what I value. Whatever moral flaws are specific to my mind shouldn’t be severe enough to destroy this argument if it’s true; but the argument could also be false. If it’s false, I lose by defaulting to the collective option; if it’s true, delegating the decision to FAI seems like a workable plan.
At the same time, there are likely practical difficulties in getting my mind in particular accepted as the preference source for FAI. If I can’t get my own preference in particular, then something as close to the common ground of humanity as I can get (a decision that as many people as possible agree with as much as possible) is the best fallback for me, by construction: if it’s better for most of humanity, it’s also better for me in particular.