Thank you for writing this. I usually struggle to find thoughts that resonate with me, but this does. Not all of it, but many key points prompted reflections I'm going to share:
Biological immortality (radical life extension) without ASI (and reasonably soon) looks hard to achieve. It's a difficult topic, but for me even Michael Levin's talks are not inspiring enough. (I would prefer to become a substrate-independent mind, but, again, imagine all that R&D without substantial superhuman help.)
I'm a rational egoist (more or less), so I want to see the future and have pretty much nothing to say about a world without me. Enjoying not being alone on the planet is just a personal preference. (I mean, our home system is good, nice planets and stuff, but what if I want to GTFO?) Also, I don't trust imaginary agents (gods, evolution, future generations, AGIs), though creating some of them may be rational.
Let's say that early Yudkowsky influenced my transhumanist views. To be honest, I feel somewhat betrayed now. Here my position is close to Max More's: basically, I value the opportunities, even if I don't like all the risks.
I agree that AI progress is really hard to stop. The focus on scaling leaves possible algorithmic breakthroughs underexplored; there is still so much to be done, I believe. The tech world will keep working on it even with mediocre hardware. So we are heading toward ASI anyway.
And all the alignment plans… Well, yes, they tend to be questionable. For me, creating human-like agency in AI (something to negotiate with) is more about capabilities, but that's a different story.