I wrote about this in Appendix A of this post.
------
One might look at the rough 50/50 chance of immortality given surviving AGI and think “Wow, I should really speed up AGI so I can make it in time!” But the action space is more something like this (a rough sketch of the model follows the list):
Work on AI safety (transfers probability mass from “die from AGI” to “survive AGI”)
The amount of probability transferred is probably at least a few microdooms per person.
Live healthy and don’t do dangerous things (transfers probability mass from “die before AGI” to “survive until AGI”)
Intuitively, I’m guessing one can transfer around 1 percentage point of probability by doing this.
Do nothing (leaves probability distribution the same)
Preserve your brain if you die before AGI (kind of transfers probability mass from “die before AGI” to “survive until AGI”)
This is a weird edge case in the model, and it is conditional on various beliefs about preservation technology and whether being “revived” is possible.
Delay AGI (transfers probability from “die from AGI” to “survive AGI” and from “survive until AGI” to “die before AGI”)
Accelerate AGI (transfers probability mass from “survive AGI” to “die from AGI” and from “die before AGI” to “survive until AGI”)
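To make the bookkeeping concrete, here is a minimal sketch of the model in Python. It assumes three mutually exclusive outcomes that sum to 1; the starting numbers and the names (Outcomes, work_on_safety, live_healthy) are illustrative placeholders, not estimates from the post.

```python
from dataclasses import dataclass

@dataclass
class Outcomes:
    """Three mutually exclusive outcomes that sum to 1."""
    die_before_agi: float
    die_from_agi: float
    survive_agi: float

    def check(self) -> None:
        total = self.die_before_agi + self.die_from_agi + self.survive_agi
        assert abs(total - 1.0) < 1e-9

def work_on_safety(o: Outcomes, microdooms: float = 3e-6) -> Outcomes:
    # Moves mass from "die from AGI" to "survive AGI" (a few microdooms per person).
    return Outcomes(o.die_before_agi, o.die_from_agi - microdooms, o.survive_agi + microdooms)

def live_healthy(o: Outcomes, shift: float = 0.01) -> Outcomes:
    # Moves ~1 percentage point from "die before AGI" to "survive until AGI";
    # that mass then resolves into doom or survival at the conditional P(doom | reach AGI).
    p_reach = o.die_from_agi + o.survive_agi
    p_doom_given_reach = o.die_from_agi / p_reach
    return Outcomes(
        o.die_before_agi - shift,
        o.die_from_agi + shift * p_doom_given_reach,
        o.survive_agi + shift * (1.0 - p_doom_given_reach),
    )

# Illustrative starting point: ~10% chance of dying before AGI and a roughly
# even doom/survival split conditional on reaching it (placeholder numbers).
baseline = Outcomes(die_before_agi=0.10, die_from_agi=0.45, survive_agi=0.45)
updated = live_healthy(work_on_safety(baseline))
updated.check()
print(updated)
```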
I think working on AI safety and living healthily both seem like much better choices than accelerating AI. I’d guess this is true even for the vast majority of purely selfish people.
For altruistic people, working on AI safety clearly trumps any other action in this space, as it has huge positive externalities. This is true for people who only care about current human lives (as one microdoom ≈ 8,000 current human lives saved), and it’s especially true for people who also place value on future lives (as one microdoom = one millionth of the value of the entire long-term future).
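For the arithmetic behind the ≈8,000 figure: a microdoom is a one-in-a-million reduction in the probability of everyone dying, so with roughly 8 billion people alive today it works out to about 8,000 current lives in expectation.

```python
current_population = 8e9   # roughly 8 billion people alive today
one_microdoom = 1e-6       # a one-in-a-million reduction in P(everyone dies)
print(current_population * one_microdoom)  # 8000.0 expected current lives saved
```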
This is a very simplified view of what it means to accelerate or delay AGI; it ignores that different ways of shifting AGI timelines transfer probability mass differently. In this model I assume that the probability of surviving AGI increases monotonically as timelines get longer, but various considerations (overhangs, other actors being able to catch up to the top labs, etc.) make this assumption shaky and not generally true for every possible way of shifting timelines.
For most people in their 20s or 30s, it is quite unlikely (around 10%) that they will die before AGI. And if you place basically any value on the lives of people other than yourself, then the positive externalities of working on AI safety probably strongly outweigh anything else you could be doing.
Acceleration probably only makes sense for people who are (1) extremely selfish (value their life more than everything else combined) and (2) likely to die before AGI unless it’s accelerated.
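A rough way to see why, with made-up numbers: accelerating AGI trades P(reaching AGI alive) against P(AGI going well). For someone already ~90% likely to reach AGI, that trade looks bad even selfishly; it can only flip for someone unlikely to reach AGI otherwise. The probabilities below are purely illustrative.

```python
def p_personal_survival(p_reach_agi: float, p_agi_goes_well: float) -> float:
    # Selfish chance of making it through: reach AGI alive AND AGI goes well for you.
    return p_reach_agi * p_agi_goes_well

# Young and healthy: acceleration barely raises P(reach) but eats into P(goes well).
print(p_personal_survival(0.90, 0.50))  # baseline    -> 0.45
print(p_personal_survival(0.95, 0.40))  # accelerated -> 0.38 (worse, even selfishly)

# Likely to die before AGI otherwise: the selfish calculus can flip.
print(p_personal_survival(0.30, 0.50))  # baseline    -> 0.15
print(p_personal_survival(0.60, 0.40))  # accelerated -> 0.24 (better selfishly, worse for everyone else)
```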
Most longevity researchers will still be super-skeptical if you say AGI is going to get us to LEV (longevity escape velocity) in our lifetimes. One could argue, à la The Structure of Scientific Revolutions, that most of them have a blind spot for recent AGI progress, but “AGI => LEV” is still handwavy logic.
Last year’s developments were fast enough for me to be somewhat more relaxed about this issue… (however, there is still the matter of slowing down the core aging rate and neuroplasticity loss, which acts on shorter timelines and is still important if you want to do your best work).
Another thing to bear in mind is the optimal trajectory to human immortality vs. the expected profit-maximizing path for AI corporations. At some point, likely very soon, we’ll have AI powerful enough to solve ageing, which then makes further acceleration very negative in expected utility for humans (https://twitter.com/search?q=from%3A%40RokoMijic%20immortality&src=typed_query).
I don’t know whether to believe it, but it’s a reasonable take...
“10% is overconfident”, given the huge uncertainty over AGI takeoff (especially its geopolitical landscape), and especially given the probability that AGI development may be slowed somehow (https://twitter.com/jachaseyoung/status/1723325057056010680).