A 1% probability of “ruin”, i.e. total extinction (which you cite as your assessment), would still be more than enough to warrant a complete pause for a lengthy period of time.
There seems to be a basic misunderstanding of expected utility calculations here, where people are equating the weight on an outcome with a simple probability × cost of the outcome. For example, if there is a 1% chance of all 8 billion people dying, the “cost” of that is not 80 million lives (as someone further down this thread computes).
Normally the way you’d think about this (if you want to apply math to questions like this) is to ask what you would be willing to pay to avoid that outcome, using expected utility.
This weights outcomes over the entire probability distribution by their (marginal) utility. In this case, marginal utility goes to infinity if we go extinct (unless you are in the “let the robots take over!” camp), and hence even small risks of extinction would warrant doing everything possible to avoid it.
This is essentially precautionary principle territory.
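To make the contrast concrete, here is a minimal worked sketch, assuming purely for illustration (the comment above does not commit to any particular utility function) that utility is logarithmic in the number of survivors $N$, so $U(N) = \log N$ and $U(N) \to -\infty$ as $N \to 0$. The naive “probability × cost” calculation gives

$$0.01 \times 8 \times 10^9 = 8 \times 10^7 \ \text{expected lives lost},$$

which is the 80-million figure computed downthread. The expected-utility calculation instead weights each branch of the gamble by its utility:

$$\mathbb{E}[U] = 0.99\,U(8 \times 10^9) + 0.01\,U(0) = -\infty,$$

so the extinction branch dominates for any nonzero probability, and any finite, certain cost is worth paying to avoid it. Under this assumed utility function, it is that divergence, not the 80 million number, that drives the “pause” conclusion.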
Far more than a “lengthy ban”: it justifies an indefinite ban, until the probability is well understood and approaches zero.
Hello Rufus! Welcome to Less Wrong!
Don’t forget that you are considering precluding medicine that could save or extend all of those lives: theoretically, every living human. The “gain” is solely in the unborn future generations who might exist in worlds with safe AGI.
And that’s worth a lot. I am a living human being, evolved to desire the life and flourishing of living human beings. Ensuring a future for humanity is far more important than whether any number of individuals alive today die. I am far more concerned with extending the timeline of humanity than with maximizing any short-term parameter.