There’s another issue though: the top comment on this thread doesn’t consider the benefits of AGI coming soon. Assuming a symmetric, or nearly symmetric, structure in how much utility it produces, my own values suggest that the positives of AGI outweigh the potential for extinction, especially over longer time periods, which is why I have said that capabilities work is net positive.
Also, how would you agree on a 20-year delay?
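To make that concrete, here is a minimal sketch of what the “symmetric or nearly symmetric” assumption is doing in the argument. Every number in it is invented for illustration; none of them are estimates anyone in this thread has given.

```python
# Toy expected-utility comparison under a "symmetric utility" assumption:
# the upside of a good post-AGI future is taken to be about as large in
# magnitude as the downside of extinction. All numbers are illustrative.

U_GOOD = 1.0         # utility of a flourishing post-AGI future (arbitrary units)
U_EXTINCTION = -1.0  # symmetric-magnitude downside of extinction

p_good = 0.8         # assumed probability of a good outcome from pursuing AGI soon
p_extinction = 0.2   # assumed probability of extinction

expected_utility = p_good * U_GOOD + p_extinction * U_EXTINCTION
print(f"Expected utility of pursuing AGI soon: {expected_utility:+.2f}")

# With symmetric stakes, the sign of the expected utility is decided purely by
# whether the good outcome is judged more likely than extinction; the claim
# that capabilities work is net positive then rests on that probability
# judgment and on how heavily long-horizon value is weighted.
```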
That would have been like a worldwide agreement, post WW2, not to build nukes. “Suuurrre,” all the parties would say. “A weapon that lets us win even if outnumbered? We don’t need THAT.”
And they would basically all defect on the agreement. The weapon is too powerful. Or one side would honor it and end up facing nuclear-armed neighbors with none of its own.
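To spell out the defection logic, here is a toy payoff table; the numbers are invented and only their ordering matters.

```python
# Toy payoff matrix for the post-WW2 "agree not to build nukes" analogy.
# Two symmetric parties each either honor the ban or secretly build.
# Payoffs are invented, ordinal-only numbers: higher is better for that party.

payoffs = {
    # (party_A_action, party_B_action): (payoff_A, payoff_B)
    ("honor", "honor"): (3, 3),  # the agreement holds, nobody gains an edge
    ("honor", "build"): (0, 4),  # A honors it and faces a nuclear-armed neighbor
    ("build", "honor"): (4, 0),  # mirror image
    ("build", "build"): (1, 1),  # everyone defects; arms race, but nobody is helpless
}

# Whatever B does, A scores higher by building (4 > 3 and 1 > 0), and vice
# versa, so building is the dominant strategy for both -- the "they would
# basically all defect" outcome, even though mutual restraint beats mutual
# defection.
for b_action in ("honor", "build"):
    a_honor = payoffs[("honor", b_action)][0]
    a_build = payoffs[("build", b_action)][0]
    print(f"If B chooses {b_action!r}: A gets {a_honor} by honoring, {a_build} by building")
```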
Okay, so it seems like our disagreement comes down to two different factors:
1. We have different value functions. I personally don’t value currently living humans >> future humans, but I agree with the reasoning that, to maximize your personal chance of living forever, faster AI is better.
2. Getting AGI sooner will have much greater positive benefits than simply 20 years of peak happiness for everyone; for example, over billions of years the cumulative effect will be greater than the value from a few hundred thousand years of AGI (rough sketch below).
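Here is a minimal sketch of the arithmetic behind that second point. The horizon, the per-year loss rate, and the units are all assumptions picked purely to illustrate the shape of the claim, not estimates from this discussion.

```python
# Toy model of why a 20-year head start can be worth far more than "20 years
# of peak happiness": if delay permanently forfeits even a tiny fraction of a
# very large long-run future, the loss compounds into something huge.
# All numbers below are invented for illustration.

TOTAL_FUTURE_VALUE = 1e9       # long-run value, in "years of AGI-level wellbeing"
LOSS_FRACTION_PER_YEAR = 1e-5  # assumed fraction of that future forfeited per year of delay
DELAY_YEARS = 20

value_lost_to_delay = DELAY_YEARS * LOSS_FRACTION_PER_YEAR * TOTAL_FUTURE_VALUE
print(f"Value lost to a {DELAY_YEARS}-year delay: {value_lost_to_delay:,.0f} AGI-years")
print(f"Naive cost of the same delay: {DELAY_YEARS} years of peak happiness")

# With these assumptions the delay costs the equivalent of a few hundred
# thousand AGI-years rather than 20 years, which is the shape of the claim above.
```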
Further, I find the idea of everyone agreeing to delay AGI by 20 years to be just as absurd as you suggest, Gerald; I just thought it could be a helpful hypothetical scenario for discussing the subject.