Hi Holly.
AI pauses are a controversial topic.
If you model out your assumptions, then depending on how you resolve certain key questions you can reach answers ranging from "should pause indefinitely" to "should never pause for any length of time".
Here are a few cruxes that can lead to "never pause":
(1) Is it feasible, using currently known methods in software and computer engineering, to build restricted superintelligences that are safe? Empirical evidence: hyperscaler software systems.
(2) Is it feasible to coordinate with other nuclear powers to restrict GPU access/production and cause pauses of any meaningful length? Empirical evidence: the Cold War.
(3) Do you think that if a superintelligence were available as a tool in the near future, you could solve some of biology and rapidly develop treatments to extend lifespans by meaningful numbers of years? (Remember, we have such treatments for rats; the problem has always been efficiently finding a treatment we know will work in humans.) Empirical evidence: rats on sirolimus and other drugs.
(4) If you think ASI is controllable, do you think it is possible to remain sovereign, rather than being immediately invaded and subjected to the whim of another power, if you don't race for ASI? Empirical evidence: the European colonization era, and many wars between more technologically advanced powers and weaker ones.
(5) The market forces in favor of no pause seem large, and they may grow further until most of the investment capital on earth is going into AI. How much of a voice will that money have, and is it even meaningful to request a pause versus trying to race to build high-quality, safer ASI first? (In US politics, money has a 'voice' that amplifies those who have it over the base pool of voters, who have expressed as a majority that they do not want AGI. I would assume the same is true for all other nuclear powers.) Empirical evidence: stock activity for Nvidia and other AI stocks, past examples of lobbyists manipulating public opinion, and most of the history of US politics.
(6) Do you believe a pause will have any utility other than possibly adding more years of life to your personal existence? Historically, I do not know of any meaningful examples where a complex technology was improved before building working examples of it to study. Humans aren't that smart. Empirical evidence: the history of all inventions and all engineering.
Conversely, while I am biased towards ‘never pause’, let’s name some cruxes that would lead me to shift to the other pole:
(1) Do ASIs emergently develop deception and the ability to coordinate with other instances of themselves? No empirical evidence.
(2) Will ASIs run as coherent entities with online weight updates and other online self-improvement, retaining global context over all tasks the machine has been asked to do? Early chatbots such as Tay AI immediately failed from online weight updates.
(3) If we build multiple ASIs, does each attempt play out like a horror movie, where the machine immediately attempts mass murder the moment it has any ability to manipulate its environment? No empirical evidence.
(4) Does nanotechnology turn out to be 'quite easy', such that we interrupt ASIs that have successfully solved it? Strong empirical evidence against.
(5) Does manipulating humans turn out to be 'quite easy', such that we interrupt ASIs running their own cults? Some empirical evidence for; the question is how generalizable cult-creation strategies are.
(6) Does an ASI find an easy way to compress/optimize itself, such that weak computers can support ASI and multiple 'rampant' copies end up infesting computers? Some empirical evidence: intelligence seems to require immense amounts of memory, but early efforts have found huge optimizations.
Perhaps I should write up a front-page post on this. One thing I will note is that even if the evidence favors the policy of 'no pause', some may estimate that the benefit of a pause (avoiding, say, a 10% chance of wiping out all value in the universe) exceeds the cost (delaying the creation of advanced technology that may ultimately pause human aging and save billions of lives).
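To make that comparison concrete, here is a minimal sketch of the expected-value framing; every number in it is a placeholder assumption of mine, not a figure anyone has argued for, and the point is only that the verdict hinges almost entirely on how much weight you place on the long-term future:

```python
# Toy expected-value comparison of "pause" vs "no pause".
# All inputs are placeholder assumptions for illustration only.
p_catastrophe = 0.10        # assumed chance that racing ahead destroys all future value
value_of_future = 1e12      # assumed value of the long-term future (arbitrary units)
deaths_from_delay = 2e9     # assumed deaths from delaying life extension during a pause
value_per_life = 1e2        # assumed value of each of those lives (same arbitrary units)

benefit_of_pause = p_catastrophe * value_of_future  # catastrophe risk avoided
cost_of_pause = deaths_from_delay * value_per_life  # life extension forgone

print(f"benefit of pause: {benefit_of_pause:.2e}")
print(f"cost of pause:    {cost_of_pause:.2e}")
```

With these placeholders the cost term comes out larger, but raising the assumed value of the long-term future by a single order of magnitude flips the verdict; that sensitivity is the crux, not the particular numbers.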
I see the cost of simply delaying life extension technology as totally acceptable to avoid catastrophe. It's more difficult to contemplate losing it entirely, but I don't think a pause would cause that. It might put it out of reach for people alive today, but what are you going to do about that? Gamble the whole future of humanity?
For many people the answer to that hypothetical is yes. Putting life extension "out of reach" is no small ask. Think about how many billions of people you are expecting to die for the possible existence of people who are not even alive right now.
One way to model this crux is the discount rate. How much do you value futures that you and your currently living descendants would not get to see? For some the answer is zero; this is simply Making Beliefs Pay Rent. If there is no possibility of you personally seeing the outcome, you should be indifferent to it.
I suspect the overwhelming majority of living humans make decisions on short, high-discount-rate timescales, and there is a lot of evidence to support this.
Applying even a modest discount rate of, say, a few percent per year also makes the value of humanity in 30-100 years very small. So again, the rational policy should be to seek life extension and never pause, unless you believe in very low discount rates on future positive events.
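To show the arithmetic behind that claim, here is a minimal sketch; the 3% rate and the horizons are illustrative assumptions, not figures from this thread:

```python
# How a modest annual discount rate shrinks the weight of far-future outcomes.
def discount_factor(rate: float, years: int) -> float:
    """Present-day weight of one unit of value received `years` from now."""
    return 1.0 / (1.0 + rate) ** years

rate = 0.03  # "a few percent per year"
for years in (30, 50, 100):
    print(f"{years:>3} years out: weighted at {discount_factor(rate, years):.3f}")

# Approximate output:
#  30 years out: weighted at 0.412
#  50 years out: weighted at 0.228
# 100 years out: weighted at 0.052
```

At that weighting, outcomes a century away count for roughly a twentieth of outcomes today, which is the sense in which the far future becomes "very small"; only with a near-zero discount rate does the calculation swing back toward a pause.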
If you wanted to convince people to support AI pauses, you would need to convince them of short-term, immediate harms (egregiously misaligned AI, near-term job loss) and either conceal from them the possibility that ASI could quickly find effective methods of life extension, or spread misinformation that it cannot.
Lying is a well-established strategy for winning political arguments, but if you must resort to lies in a world that will have the technology to unmask them, perhaps you should reexamine your priors.
For a handful of people, a large chunk of them on this website, the answer is yes. Most people don't think life extension is possible for them, and it isn't their first concern about AGI. I would bet the majority of people would not want to gamble on the possibility of everyone dying from an AGI because it might, under a subset of scenarios, extend their lives.
I think the most coherent argument above is the discount rate. Using the discount-rate model, you and I are both wrong. Since both AGI and life extension are an unpredictable number of years away, neither of us has any meaningful support for our positions among the voting public. You need to show the immediacy of your concerns about AGI; I need to show life extension driven by AGI beginning to visibly work.
AI pauses will not happen, due to this discounting. (So it is irrelevant whether they are good or bad.) That's because the threat is far away and uncertain, while the possible money to be made is near, and essentially all the investment money on earth "wants" to bet on AI/AGI. (Rationally speaking, there is no greater expected ROI.)
Please note I am sympathetic to your position; I am saying "will not happen" as a strong prediction based on the evidence, not as what I want to happen.
Also, the short-term harms of AI aren't lies?
I think you're right, and the legitimate dilemma is underrecognized. I think a front-page post on this would be very useful.