In other words, transhumanism promotes a way of thinking which tends to make transhumanists spectators of, instead of active participants in, creating the sort of future they want to see.
I believe I’ve seen Mike Darwin and others specifically point to Eliezer as an example of a cryonics proponent who is increasing the number and ratio of spectator cryonauts, rather than active cryonauts.
As a counterpoint, let me offer my own experience rediscovering cryonics through Eliezer.
Originally, I hadn’t seen the point. Like most people, I assumed cryonauts dreamed that one day someone would simply thaw them out, cure whatever killed them, and restart their heart with shock paddles or something. Even the most rudimentary understanding of or experience with biology and freezing temperatures made this idea patently absurd.
It wasn’t until I discovered Eliezer’s writings circa 2001 or so that I was able to see connections between high shock-level concepts like uploading, nanotech, and superintelligence. I reasoned that a successful outcome of cryonics is not likely to come through direct biological revival, but rather through atomically precise scanning, super-powerful computational reconstruction, and reinstantiation as an upload or in a replacement body.
The upshot of this reasoning is that for cryonics to have any chance of success, a future must be assured in which these technologies would be safely brought to bear on such problems. I continue to have trouble imagining such a future existing if the friendly AI problem is not solved before it is too late. As friendly AI seems unlikely to be solved without careful, deliberate research (which very few people are doing), investing in cryonics without also investing in friendly AI research feels pointless.
In those early years, I could afford to make donations to SIAI (now MIRI), but could not afford a cryonics plan, and certainly could not afford both. As I saw it, I was young. I could afford to wait on the cryonics, but would have the most impact on the future by donating to SIAI immediately. So I did.
That’s the effect Eliezer’s cryonics activism had on me.
Which should be fine: an increase in spectator cryonauts is only a problem if it comes at the expense of the pool of active cryonauts. Since in this case it draws in people who otherwise would have had nothing to do with cryonics, it is still a good thing.