What does this even mean? Forcing immortality on people is at least a coherent notion, although I’m pretty sure most users around here support an individual’s right to self-terminate. But if that was what was meant, calling it ‘transhumanism’ is a little off.
On the other hand, is this referring to something handled by the fun-theoretic concept of a eudaimonic rate of intelligence increase?
Yes, I know, “jargon jargon jargon buzzword buzzword rationality,” but I couldn’t think of a better way to phrase that. Sorry.
You’re right. I don’t know what Terminal Awareness meant, but I was thinking of something like uploading someone who doesn’t want to be uploaded, or increasing their intelligence (even at a eudaimonic rate) if they insist they like their current intelligence level just fine.
If it actually is coherent to speak of a “eudaimonic rate” of doing something to someone who doesn’t want it done, I need to significantly revise my understanding of the word “eudaimonic”.
I’m thinking that a eudaimonic rate of intelligence increase is one which maximizes our opportunities for learning, new insights, enjoyment, and personal growth, as opposed to an immediate jump to superintelligence. But I can imagine an exceedingly stubborn person who insists that they don’t want their intelligence increased at all, even after being told that they will be happier and lead a more meaningful life. Once they get smarter, they’ll presumably be happier with it.
Even if we accept that Fun Theory as outlined by Eliezer really is the best thing possible for human beings, there are certainly some who would currently reject it, right?
It seems to me like you’re trying to enforce your values on others. You might think you’re just trying to help, or do something good. I’m just a bit skeptical of anyone trying to enforce values rather than inspire or suggest.
Quote:
But when it comes to the question, “Don’t I have to want to be as happy as possible?” then the answer is simply “No. If you don’t prefer it, why go there?”
:s/happy/intelligent
I’m not sure if we have a genuine disagreement, or if we’re disputing definitions. So without talking about eudaimonic anything, which of the following do you disagree with, if any?
What we want should be the basis for a better future, but the better future probably won’t look much like what we currently want.
CEV might point to something like uploading or dramatic intelligence enhancement that lots of people won’t currently want, though by definition it would be part of their extrapolated preferences.
A fair share of the population will probably, if polled, actively oppose what CEV says we really want.
It seems unlikely that the optimal intelligence level is the current one, but some people would probably oppose alteration to their intelligence. This isn’t a question of “Don’t I have to want to be as intelligent as possible?” so much as “Is what I currently want a good guide to my extrapolated volition?”
Most of these give me the heebie-jeebies, but I don’t really disagree with them.