For many people the answer to that hypothetical is yes.
For a handful of people, a large chunk of them on this website, the answer is yes. Most people don't think life extension is possible for them, and it isn't their first concern about AGI. I would bet the majority of people would not want to gamble on the possibility of everyone dying from an AGI because it might, under a subset of scenarios, extend their lives.
I think the most coherent argument above is the one about discount rates. Under the discount rate model, you and I are both wrong: since AGI is an unpredictable number of years away, and so is life extension, neither of us has any meaningful support for our position among the voting public. You need to show the immediacy of your concerns about AGI; I need to show life extension driven by AGI beginning to visibly work.
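To make the discounting concrete (illustrative numbers only, not figures either of us has committed to): at a 5% annual discount rate, a payoff expected 30 years out is worth about 1/1.05^30 ≈ 0.23 of its face value today, and if you also assign it only a 20% chance of arriving at all, its present value falls to roughly 0.05 of face value. Both a distant extinction risk and distant life extension get shrunk this way in the voting public's implicit accounting.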
AI pauses will not happen because of this discounting (so it is irrelevant whether they are good or bad). The threat is far away and uncertain, while the possible money to be made is near, and essentially all the investment money on earth "wants" to bet on AI/AGI, since rationally speaking there is no greater expected ROI.
Please note I am sympathetic to your position; I am saying "will not happen" as a strong prediction based on the evidence, not as what I want to happen.