Focusing on longevity research as a way to avoid the AI apocalypse
I don’t want to die.
You don’t want to die.
The people who are potentially going to get us killed by pushing AI capability research don’t want to die.
Our most basic goals have been aligned all along, so it's tragic that we've ended up in the situation we're in. How did this happen?
First of all, people have widely differing opinions on how likely it is that AI will kill us all. Some put the probability at 99%+; others put it at something like 10%. But even the most optimistic capability researchers acknowledge that it isn't 0.
If it’s not 0, why are they taking the risk? Sure, there are huge economic incentives, but I think there’s a deeper root cause.
Let me quote Tim Urban of Wait But Why, from The AI Revolution: Our Immortality or Extinction:
“If ASI really does happen this century, and if the outcome of that is really as extreme—and permanent—as most experts think it will be, we have an enormous responsibility on our shoulders. The next million+ years of human lives are all quietly looking at us, hoping as hard as they can hope that we don’t mess this up. We have a chance to be the humans that gave all future humans the gift of life, and maybe even the gift of painless, everlasting life. Or we’ll be the people responsible for blowing it—for letting this incredibly special species, with its music and its art, its curiosity and its laughter, its endless discoveries and inventions, come to a sad and unceremonious end.
When I’m thinking about these things, the only thing I want is for us to take our time and be incredibly cautious about AI. Nothing in existence is as important as getting this right—no matter how long we need to spend in order to do so.
But thennnnnn
I think about not dying.
Not. Dying.
And then I might consider that humanity’s music and art is good, but it’s not that good, and a lot of it is actually just bad. And a lot of people’s laughter is annoying, and those millions of future people aren’t actually hoping for anything because they don’t exist. And maybe we don’t need to be over-the-top cautious, since who really wants to do that?
Cause what a massive bummer if humans figure out how to cure death right after I die.”
I think this is it. People are terrified of death, and they want to reach AGI and ASI as soon as possible because they have a "nothing to lose" mindset. Aging is going to kill us all anyway, so we'd better burn the ships and try to build a god.
Most of the important people in AI are aware of longevity research, and a handful of them have invested in it. But there isn't much hype about it these days; all eyes are on AI development. AGI is seen as the ultimate provider of radical life extension.
Here's the idea that crosses my mind: what if we gave them what they want? What if the world saw full age reversal in humans before the advent of AGI?
This is mere speculation on my part, but maybe this would induce the emotional shift we need in a lot of these people, because it would change the stakes. The "nothing to lose" mindset would be gone: if they fuck up, they'll be sacrificing an indefinitely long lifespan. So they'd have every incentive in the world to be careful.
How hard is it to solve the alignment problem on the first try? How hard is it to cure aging? I don't have exact answers to either question, but my assumption is that curing aging is easier. So focusing on longevity research could also be a good way of dying with dignity.