My interpretation of your writing is that we should relentlessly pursue the goal of AGI because it might give us some kind of protection against a future alien invasion, even though we have no idea what we would be dealing with or whether it will ever happen? Yes, the “aliens” could be swapped for AGI, but that makes the case even stranger to me: that we should develop A(G)I to protect us from AGI.
You misunderstood. Let me quote myself:
At some model capability level (GPT-8?), the overall capabilities will be subhuman, but the model will be able to control robots to do at least 90 percent of the manufacturing and resource-gathering steps needed to build more robots.
This is not an AGI, but a general machine that can do many things that GPT-4 can’t do, and almost certainly GPT-5 cannot do either. The reason we should pursue the goal is to have tools against all future dangers. Not exotic ones like aliens, but:
They have 10 times the resources to build bunkers (against future nuclear wars and engineered pandemics) and to manufacture weapons (future wars); they can afford a classifier GPU for every camera (terrorism and mass shootings), more than 10 times the spacecraft (off-planet colonies, maybe, and more telescopes to see aliens arriving), 10 times as much food and housing, solely clean energy for everything, and so on.
You then said:
We could speculate that AGI gives a 10x improvement there and a 100x improvement here and so on. But we really do not have any idea.
I don’t think 90% automation leading to approximately 10x available resources is speculation; it’s reasoning based on reducing the limiting input (human labor). Nor is it speculation to say that it is possible to build a machine that can increase automation by 90%: humans can do all of the steps, and current research shows that reaching a model that can do 90% of the tasks at the skill level of a median factory worker or technician is relatively near term. “GPT-8” assumes about 4 more generations, or 8-12 years.
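The arithmetic behind “90% automation leads to roughly 10x resources” can be made explicit: if human labor is the binding input, automating a fraction of the tasks multiplies output per remaining human hour by 1/(1 − fraction). A minimal sketch of that reasoning (the 0.9 figure is the comment’s own assumption, not an established estimate):

```python
def labor_leverage(automated_fraction: float) -> float:
    """Output multiplier when human labor is the limiting input
    and the given fraction of tasks no longer needs humans."""
    if not 0.0 <= automated_fraction < 1.0:
        raise ValueError("fraction must be in [0, 1)")
    return 1.0 / (1.0 - automated_fraction)

print(labor_leverage(0.9))   # 90% automation -> roughly 10x
print(labor_leverage(0.99))  # 99% automation -> roughly 100x
```

Note how sensitive the multiplier is near full automation: the last few percent of tasks dominate the leverage, which is why the gap between 90% and 99% matters so much in this argument.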
You then said, with some detail paragraphs that seem to say the same thing:
We just do not have any idea what a thing even “slightly smarter” than us is capable of doing; it could be just a tiny bit better than us, or it could be close to infinitely better than us.
I am talking about something “slightly dumber” than us. And we already do have an idea: GPT-4 is slightly smarter than any living human in some domains, yet it gets stuck on any practical real-world task at the moment.
Gerald Monroe, thank you for expanding your previous comments.
You propose building these sub-human machines to protect humanity from everything from nuclear war to street violence. But it also sounds as if there are two separate humanities: one that starts wars and spreads disease, and another, to which “we” apparently belong, that needs protection and should inherit the earth. How is it that those with the resources to start nuclear wars and engineer pandemics will not be in control of the best AIs, which will do their bidding? In its present form, the reason to build the sub-human machines sounds to me like an attempt to save us from the “elites”.
But I think my concern that we have no idea what capabilities certain levels of intelligence bring is brushed off too easily, since you seem to assume that a GPT-8 (an AI 8-12 years from now) should pose no direct problems to humans, except perhaps a meaning crisis due to mass layoffs, and that we should just build it. Where does this confidence come from?
One response covers both paragraphs:
It’s a fight for survival, like it always was. That’s exactly right: there will be other humans armed with new weapons made possible by AI, terrorists will be better equipped, rival nations will get killer drones, and so on.
I know with pretty high confidence, coming from the sum of all human history, that you cannot “regulate” away dangers. All you are doing is disarming yourself. You have to “get strapped” with more and better weapons. (One famous example is https://www.reed.edu/reed_magazine/june2016/articles/features/gunpowder.html )
Might those new weapons turn on you? Yes. Human survival was never guaranteed. But from humans harnessing fire to coal to firearms to nukes, it has so far always paid off for those who adopted the new tools faster and better than their rivals.
Reasonable regulations, like no nuclear reactors in private homes, are fine, but it has to be possible to innovate.
Gerald Monroe, thank you again for clarifying your thoughts.
When you say that you know with pretty high confidence that X, Y, or Z will happen, I think this encapsulates the whole debate around AI safety, i.e. that some people seem to know unknowable things for certain, which is what frightens me. How can you know, since there is nothing remotely close to the arrival of a superintelligent being in the recorded history of humans? How do you extrapolate from data that says NOTHING about encountering a superintelligent being? I am curious how you managed to get so confident about the future.
I don’t. Here’s the way the reasoning works: all intelligence of any kind, by anyone, is saying “OK, this thing I have is most similar to <these previous things I have observed, or people long dead observed and wrote down>”.
So an advanced robotic system, or an AI I can tell to do something, is kinda like [a large dataset of past examples of technology]. Therefore, even if the class match is not exact, it is still strong evidence that it will have properties shared in common with all past instances.
Specifically for tools/weapons, that dataset runs from [chipped rocks as hand axes to ICBMs to killer drones]. So far it has paid off to be up to date in the quantity and quality of weapons technology.
What makes me “confident” is that those datasets are real. They aren’t some argument someone cooked up online. British warships really did rule the seas. The Cold War was never fought precisely because of ICBMs.
Centuries of history and billions of people were involved. Meanwhile, advanced AI doesn’t exist yet, and we don’t know how far intelligence scales.
So it is more likely that past history is true than that “an author of Harry Potter fanfiction and a few hundred scientists speculating about the future” are correct.
Note it’s all about likelihood. Even when one side of a position has overwhelming evidence, you can still be wrong [example: investing in crypto]. LessWrong, though, implies that’s what this site/culture is about.
The confidence comes from the sheer number of examples.
Again though, you’re right. Maybe AI is different. But you can’t be very confident it is different without evidence of the empirical kind.
Btw, I was wrong about crypto, and the COVID lockdowns were a surprise because the closest historical match, the 1918 flu pandemic, did not have work-from-home.
Nevertheless, I think that, given the prior evidence, my assumptions were a correct EV assessment.
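The “correct EV assessment” framing can be sketched numerically: a bet can be positive in expected value even when the most likely single outcome is a loss, so losing (as with crypto here) does not by itself show the decision was wrong. The probabilities and payoffs below are purely illustrative placeholders, not figures from the discussion:

```python
# Expected value of a bet: the probability-weighted sum of payoffs.
# All numbers are hypothetical, chosen only to illustrate the point.
outcomes = [
    (0.3, 5.0),   # 30% chance: 5x return
    (0.7, -1.0),  # 70% chance: total loss
]

ev = sum(p * payoff for p, payoff in outcomes)
print(ev)  # 0.3*5 + 0.7*(-1) = 0.8: positive EV despite a likely loss
```

The point of the sketch is just that ex-post outcomes and ex-ante decision quality can diverge; the dispute in the thread is over whether the probabilities themselves can be estimated at all for unprecedented technology.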
Thank you, Gerald Monroe, for answering my question.
I agree that staying on top of the weapons development game has had some perks, but it’s not completely one-sided. Wars have, to my understanding, been mostly about control and less about extermination, so the killing is in many ways optional if the counterpart waves a white flag. When two entities with about the same military power engage in a war, that is when the real suffering happens, I believe. That is when millions die trying to win against an equal opponent. One might argue that modern wars like Iraq or Afghanistan did have one entity with a massive military power advantage over its counterpart, but the US did not use its full power (nukes) and instead opted for conventional warfare. In many senses, having no military power might be best from a survival point of view, but for sure you will be in danger of losing your freedom.
So, I understand that you believe in your priors, and they might very well be correct in predicting the future. But I still have a hard time using any kind of prior to predict what’s going to happen next, since to me the situation we might end up in, with a technology as powerful as AI combined with its inherent “black-boxiness”, has no precedent in history. That is why I am so surprised that so many people are willing to charge ahead with the standard “move fast, break things” Silicon Valley attitude.
Well, for one thing, because you don’t have a choice in a competitive environment. Any software/hardware company in the Bay Area that doesn’t adopt AI (at least to reduce developer costs) will go broke. At a national scale, any power bloc that doesn’t develop weapons using AI will be invaded and its government deposed. And it has historically not been effective to negotiate agreements not to build advanced weapons; frankly, it doesn’t appear to have ever successfully happened.
See here: https://en.wikipedia.org/wiki/Washington_Naval_Treaty . The page has been edited since, but a summary of the outcome is:
The United States developed better technology to get better performance from their ships while still working within the weight limits, the United Kingdom exploited a loop-hole in the terms, the Italians misrepresented the weight of their vessels, and when up against the limits, Japan left the treaty. The nations which violated the terms of the treaty did not suffer great consequences for their actions. Within little more than a decade, the treaty was abandoned.
Later arms control agreements such as SALT left nuclear arsenals large enough (~4,000 warheads on each side) for MAD to still hold. And agreements on chemical and biological weapons were violated, openly and covertly, until the superpowers determined, after decades of R&D, that those weapons weren’t worth the cost.
Thank you, Gerald Monroe, for explaining your thoughts further.
And this is what bothers me: the willingness of apparently intelligent people to risk everything. I am fine with people risking their own life and health for whatever reason they see fit, but to relentlessly pursue AGI without anyone really knowing how to control it is NOT OK. People can’t dabble with anthrax or Ebola at home, for obvious reasons: they can’t control it! But with AI, anything goes, and it is, if anything, encouraged by governments, universities, VCs, etc.