If I may recommend a book that might shift your estimate of your non-AI-related life expectancy: Lifespan by David Sinclair.
Quite a fascinating read; my takeaway would be: we might very well not need ASI to reach nigh-indefinite life extension. Accidents of course still happen, so in a non-ASI branch of this world I currently estimate my life expectancy at around 300-5000 years, provided this tech happens in my lifetime (which I think is likely) and given no cryonics/backups/...
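For a rough sense of where a range like that comes from: if the only remaining cause of death is accidents with a roughly constant annual hazard p, expected remaining lifespan works out to about 1/p years. A minimal sketch of that arithmetic, with the hazard values purely as illustrative assumptions rather than actuarial data:

```python
# Toy model: constant annual accidental-death hazard p (exponential survival),
# so expected remaining lifespan is roughly 1/p years.
# The hazard values below are illustrative assumptions, not actuarial data.
for p in (1 / 300, 1 / 1500, 1 / 5000):
    print(f"annual accident hazard 1/{round(1 / p)}: ~{1 / p:.0f} expected years")
```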
(I would like to make it clear that the author barely talks about immortality, more about healthspan and lifespan, but I suspect that this has to do with decreasing the risk of not being taken seriously. He mentions, e.g., millennia-old organisms as ones to "learn" from.)
Interestingly, raising one's probability estimate of non-ASI-dependent immortality automatically and drastically increases the importance of AI safety, since a) you are way more likely to be around (a bit selfish, but whatever) when it hits, b) we may actually have the opportunity to take our time (not saying we should drag our feet), so the benefit of taking risks shrinks even further, and c) if we get an ASI that is not perfectly aligned, we actually risk our immortality instead of standing to gain it.
All the best to you, looking forward to meeting you all some time down the line.
(I am certain that the times and locations mentioned by HJPEV will be realized for meet-ups, provided we make it that far.)