First, let me say I appreciate you expressing your viewpoint, and it does strike an emotional chord with me. With that said:
Wouldn’t an important invention such as the machine gun, or obviously fission weapons, fit your argument pattern? You could make a reasonable case that, like a world with technological unemployment, a world where humans are cheap to slaughter is overall worse, and that if you could have coordinated with the world powers of the time to agree to an “automatic weapon moratorium”, it would have resulted in a better world.
The problem is that Kaiser Wilhelm and other historical leaders are going to say “suuurrrreee”, agree to the deal, and you already know the nasty surprise any power honoring such a deal will face on the battlefield. (Or Stalin would have said “sureee” to such a deal on fission weapons, and we can assume he would immediately renege, test the devices in secret, and only announce their existence with a preemptive first strike on the enemies of the USSR.)
What’s different now? Is there a property of AGI/ASI that makes such international agreements more feasible?
To add one piece of information that may not be well known: I work on inference accelerator ASICs, and they are significantly simpler than GPUs. A large amount of Nvidia’s stack isn’t actually necessary if pure AI training/inference performance is your goal. So the only real chokepoint for monitoring AI accelerators is that the highest-end wafer processing equipment currently comes exclusively from ASML, creating a monitorable supply chain for now. All bets are off if major superpowers build their own domestic equivalents, which they would be strongly incentivized to do in worlds where we know AGI is possible and have built working examples.
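To make the “simpler than GPUs” point concrete: the workload an inference accelerator has to speed up is overwhelmingly dense multiply-accumulate arithmetic plus a few cheap elementwise ops, not the general-purpose programmability a GPU carries around. A toy sketch of that computational core, purely illustrative and not any real chip’s design:

```python
import numpy as np

def mac_array_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Toy model of the multiply-accumulate (MAC) core of an inference
    accelerator: three nested loops with no branching and no
    general-purpose control flow. Real chips lay this pattern out as a
    fixed-function systolic array in silicon rather than running code."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    out = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            acc = 0.0
            for p in range(k):
                acc += a[i, p] * b[p, j]  # one MAC per lane per cycle
            out[i, j] = acc
    return out
```

The point of the sketch is that a chip only needs to do this one regular, branch-free pattern extremely fast to serve most transformer inference, which is why the design space is so much smaller than a GPU’s.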
> Wouldn’t an important invention such as the machine gun, or obviously fission weapons, fit your argument pattern? You could make a reasonable case that, like a world with technological unemployment, a world where humans are cheap to slaughter is overall worse, and that if you could have coordinated with the world powers of the time to agree to an “automatic weapon moratorium”, it would have resulted in a better world.
>
> The problem is that Kaiser Wilhelm and other historical leaders are going to say “suuurrrreee”, agree to the deal, and you already know the nasty surprise any power honoring such a deal will face on the battlefield. (Or Stalin would have said “sureee” to such a deal on fission weapons, and we can assume he would immediately renege, test the devices in secret, and only announce their existence with a preemptive first strike on the enemies of the USSR.)
I might be misunderstanding your point, but I wasn’t trying to argue that it’s easy (or even feasible) to make robust international agreements not to develop AGI.
The machine gun and nuclear weapons don’t, AFAICT, fit my argument pattern. Powerful weapons like those certainly make humans easier to slaughter on industrial scales, but since humans are necessary to keep economies and industries and militaries running, military/political leaders have robust incentives to prevent large-scale slaughter of their own citizens and soldiers (and so do their adversaries, for their own people). Granted, this sometimes gets done by deterrence or arms-control agreement, and it has also started arms races, preemptive strikes, and wars hot and cold. Nevertheless, the bedrock of “human labor/intelligence is valuable/scarce” creates strong restoring forces towards “don’t senselessly slaughter tons of people”. It is even possible to create robust-ish international agreements (though I’m pretty sure Russia is cheating with those Novichoks) against weapons that are better at senseless civilian slaughter than at achieving military objectives; chemical weapons are the notable case.
The salient threat to me isn’t “AGI gives us better ways to kill people” (society has been coping remarkably well with better ways to kill people, up to and including a fleet of portable stars that can be dispatched to vaporize cities in the time it took me to write this comment). The salient threat to me, which seems inherent to the development of AGI/ASI, is “AGI renders the overwhelming majority of humanity economically/socially irrelevant, and therefore the overwhelming majority of humanity loses all agency, meaning, decision-making power, and bargaining power, and is vulnerable to inescapable and abyssal oppression if not outright killing, because there are no longer any robust incentives to keep them alive/happy/productive”.
I agree technological unemployment is a huge potential problem. Though, as always, the actual problem is aging. I think what people miss is that they treat the tasks to be done as a fixed pool: you don’t need more than one vehicle per person (or fewer), or one dwelling, or n hours per year of medical care, or food, etc. And they neglect that AGI clearly cannot be trusted to do many things regardless of capabilities, so there would need to be a fleet of human overseers armed with advanced tools.
It’s just: what do you do for a 50-year-old truck driver? Expecting them to retrain to be an O’Neill colony construction supervisor doesn’t make sense unless you can treat their aging and restore neural plasticity.
Which is itself an immense megaproject that isn’t being done. I bet aging research would go a lot faster if we had the functional equivalent of a billion people working on it, with all billion informed of everyone else’s research outcomes.
Where I was going with the analogy was much simpler: you don’t get a choice. In the immediate term, agreeing not to build machine guns and honoring that agreement means you face a rat-tat-tat when it matters most. Similarly for fission weapons: obviously your enemy is going to build a nuclear arsenal and try to vaporize all your key cities in a surprise attack.
The issues you mention play out over the long term. In the short term, you can use AGI to automate many key tasks and become vastly more economically and militarily powerful.
> I agree technological unemployment is a huge potential problem. Though, as always, the actual problem is aging.
I think this is a typical LW bias. No, I don’t enjoy the idea of death. But I would rather live a long and reasonably happy life in a human-friendly world and then die when I am old than starve to death as one of the 7.9 billion casualties of the AGI Wars. The idea that there’s some sliver of a chance that, in some future, immortality is on the table for you, personally, is a delusion. I think life extension is very possible, and true immortality is not. But as things are, either would only be on the table for, like, the CEOs of the big AI companies who got their biomarkers registered as part of the alignment protocol so that their product obeys them. Not for you. You’re the peasant whose blood, if necessary, cyber-Elizabeth Bathory will use for her rejuvenation rituals.
That’s never happened historically, and aging treatments aren’t immortality; they amount to a life expectancy on the order of 10k years. Do you know who is richer than any CEO you could name? Medicare. I bet they would like to stop paying all these medical bills, which would be the case if treated patients had the approximate morbidity rate of young adults.
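To unpack the “life expectancy on the order of 10k years” arithmetic: if aging were fully treated, the remaining risk would be a roughly constant annual hazard (accidents, violence, rare disease), and under a constant annual death probability p the expected lifespan is about 1/p. A minimal sketch, with the hazard rates chosen purely as illustrative assumptions (the result depends entirely on which young-adult mortality figure you plug in):

```python
def expected_lifespan(annual_death_prob: float) -> float:
    """Expected lifespan in years under a constant annual death
    probability p: the mean of a geometric distribution, 1/p."""
    return 1.0 / annual_death_prob

# Illustrative hazard rates (assumptions, not measured data):
# ~0.1%/year is in the ballpark of young-adult all-cause mortality in
# wealthy countries; ~0.01%/year would require also eliminating most
# accidents and violence.
print(expected_lifespan(0.001))   # ≈ 1,000 years
print(expected_lifespan(0.0001))  # ≈ 10,000 years
```

So “10k years” implicitly assumes the residual hazard is driven well below today’s young-adult accident rate; with today’s rate the figure is closer to a thousand years, which doesn’t change the qualitative point.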
You also need such treatments to be given at large scale to find and correct the edge cases. A rejuvenation-treatment “beta tester” is exactly what it sounds like: you accept a higher risk of death in exchange for earlier access. We’re going to need a lot of beta testers.
The rational, data-driven belief is that aging is treatable, and that ASI systems with the cognitive capacity to take into account more variables than humans are mentally capable of could be built to systematically attack the problem. That doesn’t mean it will help anyone alive today; there are no guarantees. But because automated systems would have found whatever treatments are possible, automated systems could also deliver the same treatments at low cost.
If you don’t think this is a reasonable conclusion, perhaps you could go into your reasoning. Arguments like the one you made above are unconvincing.
While it is true that certain esoteric treatments for aging, like young-blood transfusions, are inherently limited in who can benefit, those don’t even work that well, and de-aged hematopoietic stem cells can be generated in automated laboratories and would be a real treatment everyone can benefit from.
The wealthy are not powerful enough to “hoard” treatments, because Medicare et al. represent the government, which has a monopoly on violence and incentives not to allow such hoarding.
> The wealthy are not powerful enough to “hoard” treatments, because Medicare et al. represent the government, which has a monopoly on violence and incentives not to allow such hoarding.
That’s naive. If a private actor has an obedient ASI, they also have a monopoly on violence now. And if labour has become superfluous, states have lost all incentive to care about the opinions of their people.
I think a world with the tools to treat most causes of human death ranks strictly higher than a world without those tools, in the same way that a world with running water ranks above worlds without it. Even today not everyone benefits from running water. If you could go back in time, would you campaign against developing pipes and pumps because you believed only the rich would ever have running water? (Which was true for a period of time.)
I would campaign against lead pipes, and I would support the Goths in destroying Rome, which likely improved human futures over an alternative of widespread lead piping.
Running water doesn’t create the conditions to permanently disempower almost everyone, AGI does. What I’m talking about isn’t a situation in which initially only the rich benefit but then the tech gets cheaper and trickles down. It’s a permanent trap that destroys democracy and capitalism as we know them.