Logan Zoellner, thank you for clarifying the concept.
However, while it is possible to argue about semantics, since no one knows whether AGI will emerge from increasing compute and/or deploying new models, all takeoffs are equally dangerous. I think a fair stance for all AI researchers and companies trying to get to AGI is to admit that they have zero clue when AGI will be achieved, how that AI will behave, and what safety measures are needed to keep it under control.
Can anyone say with certainty that, for instance, a 100x increase in compute and model complexity over today’s state of the art would not constitute AGI? A 100x increase could be achieved within 2-3 years if someone poured enough money into it, i.e. went fishing for trillions in venture capital...
We are on a path for takeoff. Brace for impact.
I think that a slow (earlier) takeoff is safer than a fast (later) takeoff.
Let us agree for a moment that GPT-5 is not going to destroy the world.
Suppose aliens were going to arrive at some unknown point in the future. Humanity would obviously be in a better position to defend itself if everyone on Earth had access to GPT-5 than if no one did.
Similarly, for AGI. If AGI arrives and finds itself in a world where most humans already have access to powerful (but not dangerous) AIs, then it is less likely to destroy us all.
As an extreme, consider a world in which all technology was legally frozen at an Amish level of development for several thousand years but nonetheless some small group of people eventually broke the restriction and secretly developed AGI. Such a world would be much more doomed than our own.
Logan Zoellner thank you for further expanding on your thoughts,
No, I will not agree that GPT-5 will not destroy the world, because I have no idea what it will be capable of.
I do not understand your assertion that we would be better at fending off aliens if we have access to GPT-5 than if we do not. What exactly do you think GPT-5 could do in that scenario?
Why do you think that having access to powerful AIs would make AGI less likely to destroy us?
If anything, I believe that the Amish scenario is less dangerous than the slow takeoff scenario you described. In the slow takeoff scenario there will be billions of interconnected semi-smart entities that a full-blown AGI could take control over. In the Amish scenario there would be just one large computer somewhere that is really, really smart, but that cannot hijack billions of devices, robots, and other computers to wreak havoc.
My point is this. We do not know. Nobody knows. We might create AGI and survive, or we might not survive. There are no priors and everything going forward from now on is just guesswork.
What exactly do you think GPT-5 could do in that scenario?
At some model capabilities level (GPT-8?), the overall capabilities will be subhuman, but the model will be able to control robots to do at least 90 percent of the manufacturing and resource-gathering steps needed to build more robots. This reduces the cost of building robots by 10 times (OK, not actually 10 times, but pretending land and IP are free...), increases the total number of robots humans can build by 10 times, and, assuming the robots can be ordered to build anything similar to the steps needed to build robots (a rocket, a car, and a house are all built with similar techniques), it increases human resiliency.
Whatever problems humans have, they have 10 times the resources to deal with them.*
They have 10 times the resources to build bunkers (future nuclear wars and engineered pandemics) and manufacture weapons (future wars), can afford a classifier GPU for every camera (terrorism and mass shootings), can afford more than 10 times the spacecraft (off-planet colonies maybe, more telescopes to see aliens arriving), can produce 10 times as much food and housing, can afford to run everything solely on clean energy, and so on.
Your likely objection will be that humans could not maintain control of a bunch of general-purpose robots, that an ASI would just “hack in” and take over them all. However, if, for the sake of argument, you grant that there might be a way to secure the equipment using methods that are not hackable, it would help humans a lot.
This also helps humans with rogue AI—it gives them 10 times the resources for monitoring systems and 10 times the military hardware to deal with rebels. The “aliens” case is a superset of the “rogue AI” case, and it’s the same thing—assuming interstellar spacecraft are tiny, humans at least have a fighting chance if they have robots capable of exponential growth and several years of warning.
So it’s possible to come to the conclusion that humans have their best chance of surviving the challenges to come by “getting strapped” with near-future armies of robots, carefully locked down with many layers of cybersecurity and isolation.
And that “pausing everything” or “extremely slow AI progress with massive amounts of review and red tape” will end the way things did for China during the Opium Wars, China again in WW2, France in WW2, Ukraine right now, US automakers during the 1980s...
Failing to adopt new weapons and technology has, I believe, never yet paid off in human history. AI could be the exception to the trend.
*Actually more than 10 times, since humans need only a constant amount of food, shelter, medicine, and transport. Also, if the automation is 99 percent, that’s 100 times, and so on.
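Here is a minimal sketch of the arithmetic behind “90 percent automation is roughly 10 times the resources”. It is only a toy limiting-input model under loud assumptions: human labor is the sole constraint, it only has to cover the non-automated share of the work, and land, IP, materials, and energy are all treated as free, so output per human scales as 1 / (1 - automated fraction).

```python
def output_multiplier(automated_fraction: float) -> float:
    """Toy limiting-input model: human labor is assumed to be the only
    constraint and only has to cover the non-automated share of the work,
    so effective output per human scales as 1 / (1 - automated_fraction).
    Land, IP, materials, and energy are all assumed to be free."""
    if not 0.0 <= automated_fraction < 1.0:
        raise ValueError("automated_fraction must be in [0, 1)")
    return 1.0 / (1.0 - automated_fraction)

for a in (0.0, 0.5, 0.9, 0.99):
    print(f"{a:.0%} automated -> ~{output_multiplier(a):.0f}x output per human")
# 0% -> 1x, 50% -> 2x, 90% -> 10x, 99% -> 100x
```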
Thank you Gerald Monroe for your comments,
My interpretation of your writing is that we should relentlessly pursue the goal of AGI because it might give us some kind of protection against a future alien invasion, even though we have no idea what we would be dealing with or whether it will even happen? Yes, the “aliens” could be swapped for AGI, but that makes the case even stranger to me: that we should develop A(G)I to protect us from AGI.
We could speculate that AGI gives a 10x improvement here and 100x there, and so on. But we really do not have any idea. What if AGI is like flipping a light switch, and from one model to the next you get a trillion-fold increase in capability? How would the AI safety bots deal with that? We have no idea how to classify intelligence in terms of levels. How much smarter is a human compared to a dog? Or a snake? Or a chimpanzee? Assume for the sake of argument that a human is twice as “smart” as a chimpanzee on some crude brain-measurement scale. Are humans then twice as capable as chimpanzees? We are probably close to infinitely more capable, even though the raw brain power is NOT millions or billions or trillions of times that of a chimpanzee.
We just do not have any idea what a thing even “slightly smarter” than us is capable of doing; it could be just a tiny bit better than us, or it could be close to infinitely better than us.
My interpretation of your writing is that we should relentlessly pursue the goal of AGI because it might give us some kind of protection against a future alien invasion, even though we have no idea what we would be dealing with or whether it will even happen? Yes, the “aliens” could be swapped for AGI, but that makes the case even stranger to me: that we should develop A(G)I to protect us from AGI.
You misunderstood. Let me quote myself:
At some model capabilities level (GPT-8?), the overall capabilities will be subhuman, but the model will be able to control robots to do at least 90 percent of the manufacturing and resource-gathering steps needed to build more robots.
This is not an AGI, but a general machine that can do many things GPT-4 can’t do, and almost certainly GPT-5 cannot do either. The reason we should pursue the goal is to have tools against all future dangers. Not exotic ones like aliens, but:
They have 10 times the resources to build bunkers (future nuclear wars and engineered pandemics) and manufacture weapons (future wars), can afford a classifier GPU for every camera (terrorism and mass shootings), can afford more than 10 times the spacecraft (off-planet colonies maybe, more telescopes to see aliens arriving), can produce 10 times as much food and housing, can afford to run everything solely on clean energy, and so on.
You then said:
We could speculate that AGI gives a 10x improvement here and 100x there, and so on. But we really do not have any idea.
I don’t think 90% automation leading to approximately 10x available resources is speculation; it’s reasoning based on reducing the limiting input (human labor). It is also not speculation to say that it is possible to build a machine that can increase automation by 90%. Humans can do all of the steps, and current research shows that reaching a model that can do 90% of the tasks at the skill level of a median factory worker or technician is relatively near-term. “GPT-8” assumes about four more generations, or 8-12 years.
You then said, with some detail paragraphs that seem to say the same thing:
We just do not have any idea what a thing even “slightly smarter” than us is capable of doing; it could be just a tiny bit better than us, or it could be close to infinitely better than us.
I am talking about something “slightly dumber” than us. And we already do have an idea: GPT-4 is slightly smarter than any living human in some domains. It gets stuck on any practical real-world task at the moment.
Gerald Monroe, thank you for expanding on your previous comments.
You propose building these sub-human machines in order to protect humanity from everything from nuclear war to street violence. But it also sounds like there are two separate humanities: one that starts wars and spreads disease, and another, to which “we” apparently belong, that needs protection and should inherit the earth. How come those with the resources to start nuclear wars and engineer pandemics will not be in control of the best AIs, which will do their bidding? In its present form, the reason to build the sub-human machines sounds to me like an attempt to save us from the “elites”.
But I think my concern that we have no idea what capabilities certain levels of intelligence have is brushed off too easily, since you seem to assume that a GPT-8 (an AI 8-12 years from now) should not pose any direct problems to humans, except perhaps a meaning crisis due to mass layoffs, and that we should just build it. Where does this confidence come from?
But it also sounds like there are two separate humanities: one that starts wars and spreads disease, and another, to which “we” apparently belong, that needs protection and should inherit the earth. How come those with the resources to start nuclear wars and engineer pandemics will not be in control of the best AIs, which will do their bidding? In its present form, the reason to build the sub-human machines sounds to me like an attempt to save us from the “elites”.
But I think my concern that we have no idea what capabilities certain levels of intelligence have is brushed off too easily, since you seem to assume that a GPT-8 (an AI 8-12 years from now) should not pose any direct problems to humans, except perhaps a meaning crisis due to mass layoffs, and that we should just build it. Where does this confidence come from?
One response covers both paragraphs:
It’s a fight for survival like it always was. That’s exactly right: there will be other humans armed with new weapons made possible by AI, terrorists will be better equipped, rival nations will get killer drones, and so on.
I know with pretty high confidence, coming from the sum of all human history, that you cannot “regulate” away dangers. All you are doing is disarming yourself. You have to “get strapped” with more and better weapons. (One famous example is https://www.reed.edu/reed_magazine/june2016/articles/features/gunpowder.html )
Might those new weapons turn on you? Yes. Human survival was never guaranteed. But from humans harnessing fire to coal to firearms to nukes, it has so far always paid off for those who adopted the new tools faster and better than their rivals.
Reasonable regulations like no nuclear reactors in private homes are fine, but it has to be possible to innovate.
Gerald Monroe, thank you again for clarifying your thoughts,
When you say that you know with pretty high confidence that X, Y, or Z will happen, I think this encapsulates the whole debate around AI safety, i.e., that some people seem to know unknowable things for certain, which is what frightens me. How can you know, when there is nothing remotely close to the arrival of a superintelligent being in the recorded history of humanity? How do you extrapolate from data that says NOTHING about encountering a superintelligent being? I am curious to know how you managed to become so confident about the future.
When you say that you know with pretty high confidence that X, Y, or Z will happen, I think this encapsulates the whole debate around AI safety, i.e., that some people seem to know unknowable things for certain, which is what frightens me. How can you know, when there is nothing remotely close to the arrival of a superintelligent being in the recorded history of humanity? How do you extrapolate from data that says NOTHING about encountering a superintelligent being? I am curious to know how you managed to become so confident about the future.
I don’t. Here’s how the reasoning works. All intelligence of any kind, by anyone, amounts to saying “OK, this thing I have is most similar to <these previous things I have observed, or that people long dead observed and wrote down>”.
So an advanced robotic system, or an AI I can tell to do something, is kinda like [a large dataset of past examples of technology]. Therefore, even if the class match is not exact, it is still strong evidence that it will have properties in common with all past instances.
Specifically for tools/weapons, that dataset goes from [chipped rocks as hand axes to ICBMs to killer drones]. So far it has paid off to be up to date in the quantity and quality of weapons technology.
What makes me “confident” is that those datasets are real. They aren’t some argument someone cooked up online. British warships really did rule the seas. The Cold War never turned hot because of ICBMs.
Centuries of history and billions of people were involved. Meanwhile, advanced AI doesn’t exist yet, and we don’t know how far intelligence scales.
So it is more likely that past history holds than that “an author of Harry Potter fanfiction and a few hundred scientists speculating about the future” are correct.
Note it’s all about likelihood. Even when one side of a position has overwhelming evidence, you can still be wrong [example: investing in crypto]. The name “LessWrong”, though, implies that weighing likelihoods is what this site/culture is about.
The confidence comes from the sheer number of examples.
Again though, you’re right. Maybe AI is different. But you can’t be very confident it is different without evidence of the empirical kind.
By the way, I was wrong about crypto, and the COVID lockdowns were a surprise because the closest historical match, the 1918 flu pandemic, did not have work-from-home.
Nevertheless, I think that, given the prior evidence, my assumptions were a correct EV assessment.
Thank you Gerald Monroe for answering my question,
I agree that staying on top of the weapons development game has had some perks, but it is not completely one-sided. Wars have, to my understanding, been mostly about control and less about extermination, so the killing is in many ways optional if the counterpart waves a white flag. When two entities with about the same military power engage in a war, that is when the real suffering happens, I believe. That is when millions die trying to win against an equal opponent. One might argue that modern wars like Iraq or Afghanistan did have one side with a massive military power advantage over its counterpart, but the US did not use its full power (nukes) and instead opted for conventional warfare. In many senses, having no military power might be best from a survival point of view, but you will certainly be in danger of losing your freedom.
So, I understand that you believe in your priors, and they might very well be correct in predicting the future. But I still have a hard time using any kind of priors to predict what is going to happen next, since, to me, a technology as powerful as AI might turn out to be, combined with its inherent “black-boxiness”, has no precedent in history. That is why I am so surprised that so many people are willing to charge ahead with the standard “move fast, break things” Silicon Valley attitude.
That is why I am so surprised that so many people are willing to charge ahead with the standard “move fast, break things” Silicon Valley attitude.
Well, for one thing, because you don’t have a choice in a competitive environment. Any software/hardware company in the Bay Area that doesn’t adopt AI (at least to reduce developer cost) will go broke. At a national scale, any power bloc that doesn’t develop weapons using AI will be invaded and its government deposed. And it has historically not been effective to try to negotiate agreements not to build advanced weapons. It frankly doesn’t appear to have ever successfully happened.
See here: https://en.wikipedia.org/wiki/Washington_Naval_Treaty
The page has been edited since, but a summary of the outcome is:
The United States developed better technology to get better performance from their ships while still working within the weight limits, the United Kingdom exploited a loop-hole in the terms, the Italians misrepresented the weight of their vessels, and when up against the limits, Japan left the treaty. The nations which violated the terms of the treaty did not suffer great consequences for their actions. Within little more than a decade, the treaty was abandoned.
Later arms control agreements such as SALT left nuclear arsenals large enough that MAD still effectively applies (~4,000 nuclear warheads on each side). And agreements on chemical and biological weapons were violated openly and covertly until the superpowers determined, after decades of R&D, that they weren’t worth the cost.
Thank you Gerald Monroe for explaining your thoughts further,
And this is what bothers me: the willingness of apparently intelligent people to risk everything. I am fine with people risking their own lives and health for whatever reason they see fit, but to relentlessly pursue AGI without anyone really knowing how to control it is NOT OK. People can't dabble with anthrax or Ebola at home for the obvious reason that they can't control it! But with AI, anything goes, and it is, if anything, encouraged by governments, universities, VCs, etc.
No, I will not agree that GPT-5 will not destroy the world, because I have no idea what it will be capable of.
Great, this appears to be an empirical question that we disagree on!
I (and the Manifold prediction market I linked) think there is a tiny chance that GPT-5 will destroy the world.
You appear to disagree. I hope you are buying “yes” shares on Manifold and that one of us can “update our priors” a year from now when GPT-5 has/has-not destroyed the world.
Logan Zoellner thank you for highlighting one of your previous points,
You asked me to agree with your speculation that GPT-5 will not destroy the world. I will not agree with your speculation because I have no idea whether GPT-5 will do that or not. This does not mean that I agree with the statement that GPT-5 WILL destroy the world. It just means that I do not know.
I would not use Manifold as any data point in assessing the potential danger of future AI.
I would not use Manifold as any data point in assessing the potential danger of future AI.
What would you use instead?
In particular, I’d be interested in knowing what probability you assign to the chance that GPT-5 will destroy the world and how you arrived at that probability.
Logan Zoellner, thank you for your question,
In my view, we need more research, not people drawing inferences about extremely complex matters from what random people without the relevant knowledge bet on a given day. It is maybe fun entertainment, but it does not say anything about anything.
I do not assign any probabilities. To me, the whole game of assigning probabilities around x-risk and AI safety in general is just silly. How can anyone say, for instance, that there is a 10% risk of human extinction? What does that mean? Is that a 1-in-10 chance at a given moment, over a 23.7678-year period, forever, or what? And most importantly, how do you come up with the figure of 10%? Based on what, exactly?
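To make the timeframe ambiguity concrete, here is a minimal sketch with made-up numbers. It assumes the simplest possible model, a constant and independent annual hazard rate, which is itself an assumption nobody has justified; the point is only that the same headline “10%” implies very different risk levels depending on the period it is attached to.

```python
# Toy illustration (made-up numbers): a single headline probability is
# ambiguous without a timeframe. Assuming a constant, independent annual
# hazard rate p, the cumulative risk over t years is 1 - (1 - p)**t.

def cumulative_risk(annual_p: float, years: int) -> float:
    """Cumulative probability of the event occurring at least once in `years`
    years, given a constant annual probability `annual_p`."""
    return 1.0 - (1.0 - annual_p) ** years

def implied_annual_rate(total_p: float, years: int) -> float:
    """Annual hazard rate that yields a cumulative risk `total_p` over `years` years."""
    return 1.0 - (1.0 - total_p) ** (1.0 / years)

# Reading "10%" as an annual risk:
print(f"10% per year -> {cumulative_risk(0.10, 10):.0%} over a decade")      # ~65%
# Reading "10%" as a total risk over a century:
print(f"10% per century -> {implied_annual_rate(0.10, 100):.2%} per year")   # ~0.11%
```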
I think it’s quite unlikely that GPT-5 will destroy the world. That said, I think it’s generally reasonable to doubt prediction markets on questions that can’t be fairly evaluated both ways.
Compared to what?