But assuming it can, why would it be controversial to fulfill the wish(es) of literally everyone, while affecting everything else the least?
Problems:
Extrapolation is poorly defined, and, to me, seems to go in one of two directions: either you make people more as they would like to be, which throws any ideas of coherence out the window, or you make people ‘better’ along a specific axis, in which case you’re no longer directing the question back at humanity in a meaningful sense. Even something as simple as removing wrong beliefs (as you imply) would automatically erase any but the very weakest theological notions. There are a lot of people in the world who would die to stop that from happening. So, yes, controversial.
Coherence, one way or another, is unlikely to exist. Humans want a bunch of different things. Smarter, better-informed humans would still want a bunch of different, conflicting things. Trying to satisfy all of them won’t work. Trying to satisfy the majority at the expense of the minorities might get incredibly ugly incredibly fast. I don’t have a better solution at this time, but I don’t think taking some kind of vote over the sum total of humanity is going to produce any kind of coherent plan of action.
Trying to satisfy the majority at the expense of the minorities might get incredibly ugly incredibly fast.
But would that be actually uglier than the status quo? Right now, to a very good approximation, those who were born from the right vagina are satisfied at the expense of those born from the wrong vagina. Is that any better?
I call the Litany of Gendlin on the idea that everyone can’t be fully satisfied at once. And I also call the Fallacy of Gray on the idea that if you can’t do something perfectly, then doing it decently is no better than not doing it at all.
But would that be actually uglier than the status quo?
I don’t know. It conceivably could be, and there would be no possibility of improving it, ever. I’m just saying it might be wise to have a better model before we commit to something for eternity.
For extrapolation to be conceptually plausible, I imagine “knowledge” and “intelligence level” to be independent variables of a mind, knobs to turn. To be sure, this picture looks ridiculous. But assuming, for the sake of argument, that this picture is realizable, extrapolation appears to be definable.
Yes, many religious people wouldn’t want their beliefs erased, but only because they believe them to be true. They wouldn’t oppose increasing their knowledge if they knew it was true knowledge. Cases of belief in belief would be dissolved if it were known that true beliefs were better in all respects, including individual happiness.
Coherence, one way or another, is unlikely to exist. Humans want a bunch of different things...
Yes, I agree with this. But I believe there exist wishes universal to (extrapolated) humans, among which I think there is the wish for humans to continue existing. I would like for AI to fulfil this wish (and other universal wishes if there are any), while letting people decide everything else for themselves.