Fear of others stealing your ideas is a crank trope, which suggests it may be a common human failure mode. It’s far more likely that SIAI is slower at developing (both Friendly and unFriendly) AI than the rest of the world. It’s quite hard for one or a few people to be significantly more successfully innovative than usual, and the rest of the world is much, much bigger than SIAI.
Fear of theft is a crank trope? As someone who makes a living providing cyber security, I have to say you have no idea of the daily intrusions US companies experience from foreign governments and ordinary criminals.
Theft of higher-level, more abstract ideas is much rarer. It happens both in Hollywood films and in the real Hollywood, but not so frequently, as far as I can tell, in most industries. More often, people can’t get others to follow up on high-generality ideas. Apple and Microsoft, for instance, stole ideas from Xerox that Xerox had been sitting on for years; they didn’t steal ideas that Xerox was actively working on and compete with Xerox.
Indeed, but my point is that AGI isn’t a film or normal piece of software.
The cost-benefit analysis of would-be thieves would look a lot different.
Fear by amateur researchers of theft is a crank trope.
“Fear of others stealing your ideas is a crank trope, which suggests it may be a common human failure mode.”

I think it might be correct in the entrepreneur/startup world, but it probably isn’t when it comes to technologies that are this powerful. Just think of nuclear espionage and of the kind of security that surrounds the development of military and intelligence hardware and software. If you’re building something that could overthrow all the power structures in the world, it would be surprising if nobody tried to spy on you (or worse; kill you, derail the project, steal the almost finished code, etc).
I’m not saying it only applies to the SIAI (though my original post was directed only at them, my question here is about the AGI research world in general, which includes the SIAI), or that it isn’t just one of many, many things that can go wrong. But I still think that when you’re playing with stuff this powerful, you should be concerned with security and not just expect to forever fly under the radar.
“Just think of nuclear espionage and of the kind of security that surrounds the development of military and intelligence hardware and software.”
The reason the idea of the nuclear chain reaction was kept secret was that one man, Leo Szilard, realized the damage it could do and had his patent on the idea classified as a military secret. It wasn’t kept secret by default; if it weren’t for Szilard, it would probably have been published in physics journals like every other cool new idea about atoms, and the Nazis might well have gotten nukes before we did.
“If you’re building something that could overthrow all the power structures in the world, it would be surprising if nobody tried to spy on you (or worse; kill you, derail the project, steal the almost finished code, etc).”
Only if they believe you, which they almost certainly won’t. Even in the (unlikely) case that someone thought that an AI taking over the world was realistic, there’s still an additional burden of proof on top of that, because they’d also have to believe that SIAI is competent enough to have a decent shot at pulling it off, in a field where so many others have failed.
So you’re saying SIAI is deliberately appearing incompetent and far from its goal to avoid being attacked?
ETA: I realize you’re probably not saying it’s doing that already, but you certainly suggest that it’s going to be in SIAI’s best interests going forward.
Let’s be realistic here: the AGI research world is a small fringe element of AI research in general. It tends to have a high opinion of its own importance, an opinion not widely shared by the broader AI research community or by the world as a whole.
We are in a self-selected group of people who share our beliefs. This will bias our thinking, leading us to be too confident of our shared beliefs. We need to strive to counter that effect and keep a sense of perspective, particularly when we’re trying to anticipate what other people are likely to do.
I’m not sure I get what you’re saying.
Either the creation of smarter-than-human intelligence is the most powerful thing in the world, or it isn’t.
If it is, it would be surprising if nobody in the powerful organizations I’m talking about realizes it, especially if a few breakthroughs are made public and as we get closer to AGI.
If that is the case, this probably means that at some point AGI researchers will be “on the radar” of these people and that they should at least think about preparing for that day.
You can’t have your cake and eat it too; you can’t believe that AGI is the most important thing in the world and simultaneously think that it’s so unimportant that nobody’s going to bother with it.
I’m not saying that right now there is much danger of that. But if we can’t predict when AGI is going to happen (which means wide confidence bounds, 5 years to 100 years, as Eliezer once said), then we don’t know how soon we should start thinking about security, which probably means that starting as soon as possible is best.
“If it is, it would be surprising if nobody in the powerful organizations I’m talking about realizes it, especially if a few breakthroughs are made public and as we get closer to AGI.”
People in powerful organizations fail to realize important things all the time. To take a case study, in 2001, Warren Buffett and Ted Turner just happened to notice that there were hundreds of nukes in Russia, sitting around in lightly guarded or unguarded facilities, which anyone with a few AK-47s and a truck could have just walked in and stolen. They had to start their own organization, called the Nuclear Threat Initiative, to take care of the problem, because no one else was doing anything.
The existence of historical examples where people in powerful organizations failed to realize important things is not evidence that it is the norm or that it can be counted on with strong confidence.
Yes, it is. How could examples of X not be evidence that “the norm is X”? It may not be sufficiently strong evidence, but if this one example is not sufficiently damning, there are certainly plenty more.
Yes, of course it is weak evidence. But I can come up with a dozen examples off the top of my head where powerful organizations did realize important things, so your examples are very weak evidence that this behavior is the norm. So weak that it can be regarded as negligible.
Important things that weren’t recognized by the wider populace as important things? Do you have citations? Even for much more mundane things, governments routinely fail either to notice them or to act once they have noticed. Eg., Chamberlain didn’t notice that Hitler wanted total control of Europe, even though he said so in his publicly-available book Mein Kampf. Stalin didn’t notice that Hitler was about to invade, even though he had numerous warnings from his subordinates.
The reason to point out the crackpot aspect (e.g. item 12 in Baez’s Crackpot Index) is to adjust how people think about this question, not to argue that the question shouldn’t be asked or answered.
In particular, I want people to balance (at least) two dangers—the danger of idea-stealing and the danger of insularity slowing down innovation.
It’s a marketing strategy by those involved. I am among those who are sceptical. Generality is implicit in the definition of “intelligence”.
“It’s quite hard for one or a few people to be significantly more successfully innovative than usual, and the rest of the world is much, much bigger than SIAI.”
I would heavily dispute this. Startups with 1-5 people routinely out-compete the rest of the world in narrow domains. Eg., Reddit was built and run by only four people, and they weren’t crushed by Google, which has 20,000 employees. Eliezer is also much smarter than most startup founders, and he cares a lot more too, since it’s the fate of the entire planet instead of a few million dollars for personal use.
There is a strong fundamental streak in the subproblem of a clear conceptual understanding of FAI (what the whole real world looks like to an algorithm, which is important both for the decision-making algorithm and for the communication of values), which I find closely related to a lot of fundamental stuff that both physicists and mathematicians have been trying to crack for a long time but haven’t yet. This suggests that the problem is not a low-hanging fruit. My current hope is merely to articulate a connection between FAI and this stuff.
“I would heavily dispute this. Startups with 1-5 people routinely out-compete the rest of the world in narrow domains. Eg., Reddit was built and run by only four people, and they weren’t crushed by Google, which has 20,000 employees. Eliezer is also much smarter than most startup founders, and he cares a lot more too, since it’s the fate of the entire planet instead of a few million dollars for personal use.”
I don’t think you really understand this. As a small startup that was recently edged out by a large corporation in a narrow field of innovation, and having been in business for many years, I can tell you that the sort of thing you’re describing happens often.
As for your last statement, I am sorry, but you have not met that many intelligent people if you believe this. If you ever get out into the world you will find plenty of people who will make you feel like you’re dumb and who make EY’s intellect look infantile.
I might be more inclined to agree if EY would post some worked-out TDT problems with the associated math. Hint... hint...
Of course startups sometimes lose; they certainly aren’t invincible. But startups out-competing companies that are dozens or hundreds of times larger does happen with some regularity. Eg. Google in 1998.
“If you ever get out into the world you will find plenty of people who will make you feel like you’re dumb and who make EY’s intellect look infantile.”
(citation needed)
Ok, here are some people:
Nick Bostrom (http://www.nickbostrom.com/cv.pdf)
Stephen Wolfram (published his first particle physics paper at 16, I think; invented one of, if not the, most successful math programs ever, and in my opinion the best ever)
A couple of people whose names I won’t mention, since I doubt you’d know them, from Johns Hopkins Applied Physics Lab, where I did some work.
etc.
I say this because these people have made numerous significant contributions to their fields of study. I mean real technical contributions that move the field forward, not just terms and vague yet-to-be-solved problems.
My analysis of EY is based on having worked in AI and knowing people in AI, none of whom talk about their importance in the field as much as EY does while having as few papers and breakthroughs as EY has. If you want to claim you’re smart, you have to have accomplishments that back it up, right? Where are EY’s publications? Where is the math for his TDT? The world’s hardest math problem is unlikely to be solved by someone who needs to hire someone with more depth in the field of math. (Both statements can be referenced to EY.)
Sorry this is harsh but there it is.
I think you have confused “smart” with “accomplished”, or perhaps “possessed of a suitably impressive resumé”.
No, because I don’t believe in using IQ as a measure of intelligence (having taken an IQ test), and I think accomplishments are a better measure (quality over quantity, obviously). If you have a better measure, then fine.
What do you think “intelligence” is?
Do you think that accomplishments, when present, are fairly accurate proof of intelligence (and that you are skeptical of claims thereto without said proof), but that intelligence can sometimes exist in their absence; or do you claim something stronger?
Previously, Eliezer has said that intelligence is efficient optimization.
I have trouble meshing this definition with the concept of intelligent insanity.
The intelligently insane efficiently optimize stuff in ways they don’t want it optimized.
Eliezer invoked the notion of intelligent insanity in response to Aumann’s approach to the absent-minded driver problem. In this case, what was Aumann efficiently optimizing in spite of his own wishes?
“Do you think that accomplishments, when present, are fairly accurate proof of intelligence (and that you are skeptical of claims thereto without said proof)”
Couldn’t have said it better myself. The only addition would be that IQ is an insufficient measure although it can be useful when combined with accomplishment.
“I think accomplishments are a better measure (quality over quantity, obviously)”

I once came third in a marathon. How smart am I? If I increase my mileage to the level that would be required for me to come first, would that make me smarter? Does the same apply when I’m trying to walk in 40 years?
ETA: I thought I cancelled this one. Never mind, I stand by my point. Achievement is the best predictor of future achievement. It isn’t a particularly good measure of intelligence. Achievement shows far more about what kinds of things someone is inclined to achieve (and signal), as well as how well they are able to motivate themselves, than it does about intelligence (see, for example, every second page here). Accomplishments are better measures than IQ, but they are not a measure of intelligence at all.
I agree that both Bostrom and Wolfram are very smart, but this does not a convincing case make. Even someone at the 99.9999th percentile of intelligence will have 6,800 people who are as smart as or smarter than they are.
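For what it’s worth, here is a quick sanity check of that figure; the ~6.8 billion world population is my own assumption (roughly the figure at the time of this discussion), not something stated above:

```python
# Rough sanity check of the "6,800 people" claim.
# Assumption (mine, not from the thread): world population of about 6.8 billion.
world_population = 6.8e9
percentile = 99.9999  # the claimed percentile of intelligence

# Fraction of people at or above that percentile, and the headcount it implies.
fraction_at_or_above = 1 - percentile / 100
people_as_smart_or_smarter = world_population * fraction_at_or_above

print(round(people_as_smart_or_smarter))  # -> 6800
```

This only converts a percentile into a headcount under that population assumption; it says nothing about how you would actually rank people by intelligence.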