I’m thinking that a eudaimonic rate of intelligence increase is one that maximizes our opportunities for learning, new insights, enjoyment, and personal growth, as opposed to an immediate jump to superintelligence. But I can imagine an exceedingly stubborn person who insists that they don’t want their intelligence increased at all, even after being told that they will be happier and lead a more meaningful life. Once they get smarter, they’ll presumably be happier with it.
Even if we accept that Fun Theory as outlined by Eliezer really is the best thing possible for human beings, there are certainly some who would currently reject it, right?
It seems to me like you’re trying to enforce your values on others. You might think you’re just trying to help, or to do something good. I’m just a bit skeptical of anyone trying to enforce values rather than inspire or suggest.
But when it comes to the question, “Don’t I have to want to be as happy as possible?” then the answer is simply “No. If you don’t prefer it, why go there?”
I’m not sure if we have a genuine disagreement, or if we’re disputing definitions. So without talking about eudaimonic anything, which of the following do you disagree with, if any?
What we want should be the basis for a better future, but the better future probably won’t look much like what we currently want.
CEV might point to something like uploading or dramatic intelligence enhancement that lots of people won’t currently want, though by definition it would be part of their extrapolated preferences.
A fair share of the population will probably, if polled, actively oppose what CEV says we really want.
It seems unlikely that the optimal intelligence level is the current one, but some people would probably oppose alteration to their intelligence. This isn’t a question of “Don’t I have to want to be as intelligent as possible?” so much as “Is what I currently want a good guide to my extrapolated volition?”
Quote: “But when it comes to the question, ‘Don’t I have to want to be as happy as possible?’ then the answer is simply ‘No. If you don’t prefer it, why go there?’”

:s/happy/intelligent
Most of these give me the heebie-jeebies, but I don’t really disagree with them.