For something you give to journalists, the bullet points are the most important part: the journalists will form their impression from them and will probably ignore the rest, spinning the narrative whichever way they like. What you currently call the “Executive summary” isn’t one. You have to fit your points into a 30-second pitch; two pages are 1.75 pages too long, and non-experts will treat the document as optional supplementary material. With this in mind, I recommend skipping the first point:

The risks of artificial intelligence are strongly tied with the AI’s intelligence.

as unclear and therefore low-impact.
The second one:

There are reasons to suspect a true AI could become extremely smart and powerful.

is too mildly worded. “Reasons to suspect” and “could” are not a call to action. And “true AI” as opposed to what? False AI? Fake AI? If MIRI believes that unchecked organic AI development will almost inevitably lead to smarter-than-human artificial intelligence, this needs to be stated right there in the first line.
Possibly replace your first two points with something like
By all indications, Artificial Intelligence could some day exceed human intelligence.
History has proven that expecting to control or to even understand someone or something that is significantly smarter than you are is a hopeless task.
The rest of the bullet points look good to me. I would still recommend running your summary by a sympathetic but critical outsider before publishing it.
Cheers for those points!
Do you know of someone in particular? We seem to have browbeaten most of the locals to our point of view...
No, but maybe cold-call/email some tech-savvy newsies, like this bloke from El Reg.
Thanks!
The first point is hung on the horns of a dilemma: implausible vs. don’t-care. If you claim that the AI is certain to be much smarter than humans, the claim is implausible. If you make the plausible claim that the AI “could some day”, then it’s not scary at all and gets downgraded to an imaginary threat in the far future.
The second point is iffy. I am not sure what you actually mean, and I have counterexamples: the Soviet political elite (which was, by all accounts, far from a collection of geniuses) had little trouble controlling Soviet scientists, e.g. physicists who were very much world-class.
First point: many barriers people claimed an AI would never break have fallen (e.g. chess, Jeopardy!, even Go). Many still remain, but few claim anymore that these barriers are untouchable. Second point: take any political or economic example. Smarts and cunning tend to win (e.g. politics: Stalin, Pinochet, Mugabe).

many barriers people claimed an AI would never break have fallen

We are talking about the level of the mainstream media, right? I doubt they have a good grasp of progress in the AI field, and superpowerful AIs are still associated with action movies.
Smarts and cunning tend to win (e.g. politics: Stalin, Pinochet, Mugabe)

Huh? Ruthlessness and cunning tend to win. Being dumb is, of course, a disadvantage (though it can be overcome: see Idi Amin Dada), but I am not aware that Stalin, Pinochet, or Mugabe were particularly smart.
He’s certainly bookish.
Smarts and cunning tend to win (e.g. politics: Stalin, Pinochet, Mugabe).

..., Bush, oh wait!

Anyway, even if successful politicians tend to be smart (with some exceptions), it doesn’t imply that being smart is the primary property determining political success. How many smart wannabe politicians are unsuccessful?
It is worth recalling that Bush is estimated to be at the 95th percentile when it comes to intelligence. Not the smartest man in the country by a long shot, but not so far down the totem pole either.
(It would be interesting to look at the percentiles politicians have in various dimensions (intelligence, height, beauty, verbal ability, wealth, etc.) and see how the distributions differ. This isn’t the Olympics, where you’re selecting on one specific trait, but an aggregation, and it may be that intelligence does play a large role in that aggregation.)
Found this: http://en.wikipedia.org/wiki/U.S._Presidential_IQ_hoax#IQ_estimations_by_academics
I suppose my perception of him was biased, and I’m not even American! Well, time to update...
I recall reading somewhere that somebody bothered to check, and it turned out that height and physical looks correlate with political success. Which is not surprising, considering the halo effect.
Bush was very smart as a politician. He made rhetorical “mistakes”, but never ones that would penalise him with his core constituency, for example. At getting himself elected, he was most excellent.
Isn’t this a circular argument? Bush was smart because he was elected and only smart people get elected.
No, this was my assessment of his performance in debates and on the campaign trail.
While Monte Carlo tree search helps, Go AIs must still be given the maximum handicap to be competitive with high-level players, and they still consistently lose. Modern Go algorithms are highly parallel, though, so maybe it’s just a matter of getting larger clusters.
Also, I wonder if these sorts of examples cause people to downgrade the risk: it’s hard to imagine how a program that plays Go incredibly well poses any threat.
I saw some 4 stone games against experts recently, and thought that 8 stones was the maximum normal handicap.
It looks like I’m out of date. A search revealed that a few Go bots have risen to 4 dan on various online servers, which implies a 4- or 5-stone handicap against a top-level player (9 dan).
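The rank-to-handicap arithmetic behind that estimate rests on the usual rule of thumb that one dan of rank difference is worth roughly one handicap stone (an approximation only, and amateur and professional dan scales don’t line up this neatly in practice):

```python
def handicap_stones(stronger_dan: int, weaker_dan: int) -> int:
    """Rule-of-thumb handicap: one stone per dan of rank difference,
    capped at the conventional maximum of 9 stones."""
    return min(max(stronger_dan - weaker_dan, 0), 9)

# A 4-dan bot facing a 9-dan player would thus receive 5 stones.
```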
Found what I was thinking of: here are some game records of a human 9 dan playing two computers with 4 stone handicaps, winning to one and losing to the other (and you can also see a record of the two computers playing each other).