I just want to say you are not alone, as my own goals very closely align with yours (and Jennifer’s, as she expressed them in this thread as well). It’s nice to know that there are other people working towards “infinite evolution” and viewing mental qualia like pain, suffering, and happiness as merely the signals that they are. Ad astra, turchin.
(Also, if you know of a more focused sub-group for discussing how to actually implement and proactively accomplish such goals, I’d love to join.)
I think that all of us share the same subgoal for the next 100 years: prevent x-risks and near-term personal mortality from aging and accidental death.
Elon Musk, with his Neuralink, is looking in a similar direction. He also underlines the importance of “meaning” as something that connects you with others.
I don’t know about any suitable sub-groups.
That’s a very weak claim. Humans have lots and lots of (sub)goals. What matters is how high that goal ranks in the hierarchy of all their goals.
Although a disproportionate number of us share those goals, I think you’d be surprised at the diversity of opinion here. I’ve encountered EA people focused on reducing suffering over personal longevity, fundamentalist environmentalists who value ecological diversity over human life, and those who work on AI ‘safety’ with the dream of making an overpowering AI overlord that knows best (a dystopian outcome IMHO).