I see the cost of simply delaying life extension technology as totally acceptable to avoid catastrophe. It’s more difficult to contemplate losing it entirely, but I don’t think a pause would cause that. It might put it out of reach for people alive today, but what are you going to do about that? Gamble the whole future of humanity?
For many people the answer to that hypothetical is yes. Putting life extension “out of reach” is no small ask. Think about how many billions of people you are expecting to die for the possible existence of people who are not even alive right now.
One way to model this crux is discount rate. How much do you value futures that you and your currently living descendants would not get to see? For some the answer is zero; this is simply Making Beliefs Pay Rent. If there is no possibility of you personally seeing the outcome, you should be indifferent to it.
I suspect the overwhelming majority of living humans make decisions on short, high-discount-rate timescales, and there is a lot of evidence to support this.
Applying even a modest discount rate of, say, a few percent per year also makes the value of humanity in 30-100 years very small. So again, rational policy should be to seek life extension and never pause, unless you believe in very low discount rates on future positive events.
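For concreteness, here is a minimal sketch of that arithmetic (my own illustrative numbers, not anything claimed above): under standard exponential discounting, a benefit t years away is worth roughly 1/(1+r)^t of its face value at annual rate r.

```python
# Minimal sketch (illustrative rates only): present-value multiplier
# of a benefit t years in the future at annual discount rate r.
def discount_factor(r: float, years: int) -> float:
    return 1.0 / (1.0 + r) ** years

for r in (0.03, 0.05):
    for t in (30, 100):
        print(f"rate {r:.0%}, {t} years out: {discount_factor(r, t):.3f}")

# Expected output (approximate):
# rate 3%, 30 years out: 0.412
# rate 3%, 100 years out: 0.052
# rate 5%, 30 years out: 0.231
# rate 5%, 100 years out: 0.008
```

Even at 3% per year, outcomes a century out are worth about 5% of face value; at 5% they are worth under 1%, which is the sense in which the value of humanity in 30-100 years becomes very small under this model.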
If you wanted to convince people to support AI pauses, you would need to convince them of short-term, immediate harm (egregiously misaligned AI, near-term job loss) and either conceal from them the possibility that ASI could quickly find effective methods of life extension, or spread misinformation that it could not.
Lying is a well-established strategy for winning political arguments, but if you must resort to lies in a world that will have the technology to unmask them, perhaps you should reexamine your priors.
“For many people the answer to that hypothetical is yes.”
For a handful of people, a large chunk of them on this website, the answer is yes. Most people don’t think life extension is possible for them, and it isn’t their first concern about AGI. I would bet the majority of people would not want to gamble on everyone dying to an AGI just because, under a subset of scenarios, it might extend their lives.
I think the most coherent argument above is discount rate. Using the discount rate model, you and I are both wrong. Since AGI, and life extension with it, is an unpredictable number of years away, neither of us has any meaningful support for our positions among the voting public. You need to show the immediacy of your concerns about AGI; I need to show life extension driven by AGI beginning to visibly work.
AI pauses will not happen, because of this discounting (so it’s irrelevant whether or not they are good or bad). The threat is far away and uncertain, while the possible money to be made is near, and essentially all the investment money on earth “wants” to bet on AI/AGI, since rationally speaking there is no greater expected ROI.
Please note I am sympathetic to your position; I am saying “will not happen” as a strong prediction based on the evidence, not as what I want to happen.
Also, the short-term harms of AI aren’t lies?