When you say that you know with pretty high confidence that X, Y or Z will happen, I think this encapsulates the whole debate around AI safety, i.e. that some people seem to know unknowable things for certain, which is what frightens me. How can you know, since there is nothing remotely close to the arrival of a superintelligent being in the recorded history of humans? How do you extrapolate from data that says NOTHING about encountering a superintelligent being? I am curious to know how you managed to get so confident about the future.
I don’t. Here’s how the reasoning works. All intelligence of any kind, by anyone, amounts to saying “ok, this thing I have is most similar to <these previous things I have observed, or that people long dead observed and wrote down>”.
So an advanced robotic system, or an AI I can tell to do something, is kinda like [a large dataset of past examples of technology]. Therefore, even if the class match is not exact, it is still strong evidence that it will share properties with all past instances.
Specifically for tools/weapons, that dataset runs from [chipped rocks used as hand axes, to ICBMs, to killer drones]. So far it has paid off to stay up to date in both the quantity and quality of weapons technology.
What makes me “confident” is that those datasets are real. They aren’t some argument someone cooked up online. British warships really did rule the seas. The Cold War was never actually fought, because of ICBMs.
Centuries of history and billions of people were involved. Advanced AI, by contrast, doesn’t exist yet, and we don’t know how far intelligence scales.
So it is more likely that past history holds than that “an author of Harry Potter fanfiction and a few hundred scientists speculating about the future” are correct.
Note it’s all about likelihood. Even when one side of a position has overwhelming evidence, you can still be wrong [example: investing in crypto]. The name “LessWrong” implies that’s what this site/culture is about.
The confidence comes from the sheer number of examples.
Again though, you’re right. Maybe AI is different. But you can’t be very confident it is different without evidence of the empirical kind.
By the way, I was wrong about crypto, and the COVID lockdowns were a surprise because the closest historical match, the 1918 flu pandemic, did not have work-from-home as an option.
Nevertheless, I think that given the prior evidence, my assumptions were a correct expected-value (EV) assessment.
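To make the “confidence from the sheer number of examples” and the EV framing concrete, here is a minimal sketch of that style of reasoning. It is my own illustration with invented numbers, not anything stated in the thread: Laplace’s rule of succession gives the probability that the next case resembles the previous N, and a toy expected-value calculation shows how a bet can be correct in expectation yet still lose in a particular instance (the crypto example).

```python
# Illustrative sketch only: the figures below are invented, not from the discussion.

def rule_of_succession(successes: int, trials: int) -> float:
    """Laplace's rule of succession: probability the next trial succeeds after
    observing `successes` out of `trials`. With many consistent historical
    examples this approaches 1 but never reaches it."""
    return (successes + 1) / (trials + 2)

def expected_value(p_win: float, payoff_win: float, payoff_lose: float) -> float:
    """Expected value of a bet paying `payoff_win` with probability `p_win`
    and `payoff_lose` otherwise."""
    return p_win * payoff_win + (1 - p_win) * payoff_lose

if __name__ == "__main__":
    # "Confidence from the sheer number of examples": if, hypothetically, 500
    # past cases of a new tool/weapon behaved like earlier tools, the next case
    # is very likely (but not certain) to do the same.
    print(rule_of_succession(successes=500, trials=500))  # ~0.998

    # A positive-EV bet can still lose in any single instance:
    # 60% chance of +1.0, 40% chance of -0.5 has EV of +0.4.
    print(expected_value(p_win=0.6, payoff_win=1.0, payoff_lose=-0.5))  # 0.4
```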
Thank you, Gerald Monroe, for answering my question.
I agree that staying on top of the weapons development game has had its perks, but it’s not completely one-sided. Wars have, to my understanding, been mostly about control and less about extermination, so the killing is in many ways optional if the counterpart waves a white flag. When two entities with roughly the same military power engage in a war, that is when the real suffering happens, I believe. That is when millions die trying to win against an equal opponent. One might argue that modern wars like Iraq or Afghanistan did have one entity with a massive military advantage over its counterpart, but the US did not use its full power (nukes) and instead opted for conventional warfare. In many senses, having no military power might be best from a survival point of view, but you will certainly be in danger of losing your freedom.
So, I understand that you believe in your priors, and they might very well be correct in predicting the future. But I still have a hard time using any kind of prior to predict what’s going to happen next, since to me the situation we may face with a technology as powerful as AI, combined with its inherent “black-boxiness”, has no precedent in history. That is why I am so surprised that so many people are willing to charge ahead with the standard “move fast, break things” Silicon Valley attitude.
That is why I am so surprised that so many people are willing to charge ahead with the standard “move fast, break things” Silicon Valley attitude.
Well, for one thing, because you don’t have a choice in a competitive environment. Any software/hardware company in the Bay Area that doesn’t adopt AI (at least to reduce developer cost) will go broke. At a national scale, any power bloc that doesn’t develop weapons using AI will be invaded and its governments deposed. And it has historically not been effective to negotiate agreements not to build advanced weapons; it frankly doesn’t appear to have ever successfully happened.
See here: https://en.wikipedia.org/wiki/Washington_Naval_Treaty. The page has been edited since, but a summary of the outcome is:
The United States developed better technology to get better performance from their ships while still working within the weight limits, the United Kingdom exploited a loop-hole in the terms, the Italians misrepresented the weight of their vessels, and when up against the limits, Japan left the treaty. The nations which violated the terms of the treaty did not suffer great consequences for their actions. Within little more than a decade, the treaty was abandoned.
Later arms control agreements such as SALT left nuclear arsenals large enough that mutually assured destruction (MAD) still effectively applied (~4,000 nuclear warheads on each side). And agreements on chemical and biological weapons were violated, openly and privately, until the superpowers determined, after decades of R&D, that those weapons weren’t worth the cost.
Thank you, Gerald Monroe, for explaining your thoughts further.
And this is what bothers me: the willingness of apparently intelligent people to risk everything. I am fine with people risking their own life and health for whatever reason they see fit, but relentlessly pursuing AGI without anyone really knowing how to control it is NOT ok. People can’t dabble with anthrax or Ebola at home for obvious reasons: they can’t control it! But with AI, anything goes, and it is, if anything, encouraged by governments, universities, VCs, etc.