Remember that no matter what, we’re all going to die eventually, until and unless we cure aging itself.
Not necessarily; there are other options, for example cryonics.
Which I think is important. If our only options were:
1) Release AGI that risks killing all humans with high probability, or
2) Don't do it until we're confident it's pretty safe, and each human dies before they turn 200,
then I can see how some people might think that option 2) guarantees the universe loses all value for them personally, and choose 1) even if it's very risky.
However, we also have the following option:
3) Don't release AGI until we're confident it's pretty safe, but do our best to preserve everyone so that they can be revived when we do.
I think this makes waiting much more palatable: even those who care only about some humans currently alive are better off waiting to release AGI, as long as cryonics is at least as likely to succeed as a rushed release going well.
(Also, working directly on solving aging while waiting on AGI might have a better payoff profile than rushing AGI anyway.)
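To make the comparison concrete, here's a minimal sketch of the expected-survival calculation, with made-up placeholder probabilities rather than actual estimates: for someone who only values currently-alive people, option 3) beats option 1) exactly when cryonics' revival probability matches or exceeds the probability that a rushed AGI release goes well.

```python
# Toy comparison of the two options, for someone who only cares about
# people who are currently alive. All numbers are made-up placeholders,
# not estimates of anything.

p_rushed_agi_ok = 0.2  # hypothetical: chance a rushed AGI release goes well
p_cryonics_ok = 0.3    # hypothetical: chance preservation + later revival works

# Chance that a currently-alive person makes it, under each option:
p_survive_rush = p_rushed_agi_ok  # you live iff the rushed AGI goes well
p_survive_wait = p_cryonics_ok    # you're revived iff cryonics works
                                  # (assuming safe AGI eventually arrives)

better = "wait + cryonics" if p_survive_wait >= p_survive_rush else "rush"
print(f"rush: {p_survive_rush}, wait + cryonics: {p_survive_wait} -> {better}")
```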
But we have no idea if our current cryonics works. It’s not clear to me whether it’s easier to solve that or to solve aging.
I think it should be much easier to get a good estimate of whether cryonics would work. For example:
if we could simulate an individual C. elegans, then we'd know pretty well what kind of info we need to preserve,
and then we could check whether we're preserving it (even if current methods for extracting all the relevant info won't work for a whole human brain, because they're way too slow).
And it's a much less risky path than doing AGI quickly. So I think this is a mitigation it'd be good to work on, so that waiting to make AI safer is more palatable.
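To spell out the shape of that check, here's a toy sketch. Everything in it is hypothetical: we don't have a C. elegans simulator or these extraction methods, and every function name is a placeholder. The point is only that once the simulator exists, "does this preservation method keep the relevant info?" becomes a concrete, automatable test.

```python
# Toy sketch of the validation loop. None of these capabilities exist
# yet; every function here is a hypothetical placeholder.

def simulate(info):
    # Stand-in for a working C. elegans simulator: maps preserved info
    # (connectome, synapse weights, ...) to some "behavior" summary.
    return tuple(sorted(info.items()))

def extract_info(worm):
    # Stand-in for reading out whatever state the simulator needs.
    return dict(worm)

def method_preserves_relevant_info(worm, preserve):
    # A preservation method "works" if the simulated behavior of the
    # preserved-then-extracted worm matches the original's.
    before = simulate(extract_info(worm))
    after = simulate(extract_info(preserve(worm)))
    return before == after

# A lossless method trivially passes; a lossy one fails.
worm = {"synapse_1": 0.7, "synapse_2": 0.1}
lossless = lambda w: dict(w)
lossy = lambda w: {k: 0.0 for k in w}  # wipes the synapse weights
print(method_preserves_relevant_info(worm, lossless))  # True
print(method_preserves_relevant_info(worm, lossy))     # False
```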