I think that a theory of final goals should not be about happiness and suffering.
My final goals are about infinite evolution and the like; suffering is just a signal that I have chosen the wrong path or need to call 911. If we fight the signal, we forget to change reality and start to live in an illusion.
Moreover, I think the value of being alive is more important than the value of happiness.
+1
Hey, turchin, do you mind explaining how you arrived at your final goals, i.e. infinite evolution?
I’m looking for a way to test which final goal is more right. My current best guess for my own final goal is “avoiding pain and promoting play,” and I’ve heard someone suggest, alternatively, “beauty in the universe and eyes to see it.” It would be neat if these different goals were reconcilable in some way.
To begin with, I should note that any goal which does not include immortality is stupid, as infinite existence will include the realisation of almost all other goals. So immortality seems to be a good proxy for the best goal. It is a better goal than pleasure, which is always temporary and somewhat uninteresting.
However, there is something bigger than immortality. I call it “becoming a God”. But I can’t just jump there, or become enlightened, or whatever; it would not be me. I want to go all the way from where I am now to being an infinitely complex, eternal, superintelligent, and benevolent being. I think it is the most interesting way to live.
But it is not just fun. It is the meaning of life. And “meaning” is what makes you work even when there is no fun ahead. For example, if you care about the survival of your family, it gives you meaning. Or, to put it better, the meaning takes hold of you.
The idea of infinite evolution is also a meaning, for the following reasons. There is a basic drive to evolve in every living being. When you choose a beautiful partner, you want to put your genes in the best possible place and create the best possible children, and this is a drive that moves evolution. (Not a very scientific claim, as sexual selection is not as well accepted as natural selection, so it is more a poetic expression of my feeling about the natural drive toward evolution.) If one educates oneself, reads, travels, and so on, it is all part of this desire for evolution. Even the first AI will immediately find it and start to self-improve.
The desire to evolve is something like the Nietzschean “will to power”, but this will is oriented toward the infinite space of future possible mind states.
I would add that I spent years working on the theory of happiness. I abandoned it and I feel much better. I don’t need to be happy; I just need to be in working condition to pursue my mission: to evolve infinitely (which also includes saving humanity from x-risks and providing life extension for all, so my goal is altruistic).
It may look as if this goal has smaller prior chances of success, but that is not so, for two reasons: one is connected with the appearance of superintelligence in the near term, and the other is some form of observation selection which will prevent me from seeing my failure. If I merge with a superintelligent AI, I could continue my evolution (as could other people).
There is another point of view that I have often heard from Lesswrongers: that we should not dare to think about our final goals, as superintelligence will provide us with better goals via CEV. However, there is some circularity here, as the superintelligence has to extract our values from us, and if we do not invest in attempts to articulate them, it could assume that the most popular TV series are the best presentation of the world we want to live in. That would be “Game of Thrones” and “The Walking Dead”.
Like username2, I’m also happy to hear of others with views in this direction. A couple of years ago I made a brief attempt at starting a modern religion called noendism, with the sole moral of survival. Not necessarily individual survival; on that we may differ.
However, since then my core beliefs have evolved a bit and it’s not so simple anymore. For one thing, after extensive research I’ve convinced myself that personal immortality is practically guaranteed. For another, one of my biggest worries is surviving but being imprisoned in a powerless situation.
Anyway, those details aren’t practically relevant to my day-to-day life; these similar goals all head in the same direction.
I just want to say you are not alone, as my own goals very closely align with yours (and Jennifer’s, as she expressed them in this thread as well). It’s nice to know that there are other people working towards “infinite evolution” and viewing mental qualia like pain, suffering, and happiness as merely the signals that they are. Ad astra, turchin.
(Also, if you know of a more focused sub-group for discussing how to actually implement and proactively accomplish such goals, I’d love to join.)
I think that all of us share the same subgoal for the next 100 years: prevent x-risks and personal short-term mortality from aging and accidental death.
Elon Musk, with his Neuralink, is looking in a similar direction. He also underlines the importance of “meaning” as something that connects you with others.
I don’t know about any suitable sub-groups.
That’s a very weak claim. Humans have lots and lots of (sub)goals. What matters is how high that goal sits in the hierarchy or ranking of all the goals.
Although a disproportionate number of us share those goals, I think you’d be surprised at the diversity of opinion here. I’ve encountered EA people focused on reducing suffering over personal longevity, fundamentalist environmentalists who value ecological diversity over human life, and people who work on AI “safety” with the dream of making an overpowering AI overlord that knows best (a dystopian outcome IMHO).