This scenario is a caricature of Bostrom’s argument, because I am not trying to convince you of it, but vaccinate you against it.
A+ rationality
On the one hand, thanks for linking this, because it helps to know what memes are in the water. But on the other hand, what a depressing counterargument to go up against, because of how bad it is.
Attention conservation notice: I haven’t read all of this, but it seems like basically a gish gallop. I don’t recommend reading it if you’re looking for intelligent critiques of AI risk. I’m downvoting this post because replying to gish gallops takes time and effort but doesn’t particularly lead to new insight, and I’d prefer LW not be a place where we do that.
(edit: oh right, downvoting is disabled. At any rate, I would downvote it if I could.)
(The above is not a reply to the post. The below is a brief one.)
From about half-way through, it seems like a lot of the arguments are «we worry that AI will do this, but humans don’t do it, so AI might not do it either.» Not arguing that AI is not a threat, just that there exist plausible-on-the-face-of-it instantiations of AI that are not threats.
And we also have «getting AI right will be really hard», which, uh, yes that is exactly the point.
This article just went viral. Here’s more discussion on Hacker News, Metafilter, and BoingBoing. It might be worthwhile for someone like Scott Alexander to write a semi-official response.
Also there are Google Alert type tools that could make it easier to be one of the first few commenters, if this happens again. (Google Alerts itself doesn’t work super well, but there are competitors.)
It’s somewhat ironic that the talk starts with the example of nuclear bombs, which is almost a perfect counterargument to the rest of the talk. Imagine the talk being set in the 1930s:
“Nuclear fission is new technology that only a few very intelligent scientists think is possible, one that can grant god-like power to the wielder or destroy the world in an instant, and one that we might only have a single chance to get right. But obviously everything from human muscles to gunpowder only works based on oxygen combustion, and oxygen combustion can’t destroy the world. So clearly the idea of nuclear fission is a crazy cult of nerdy physicists.”
It’s a little disheartening to see that all of the comments so far except one have missed what I think was the presenter’s core point, and why I posted this link. Since this is a transcript of a talk, I suggest that people click the link at the beginning of the linked page to view the video of the talk. Transcripts, although immensely helpful for accessibility and search, can at times like this miss the important subtleties of emphatic stress or candor of delivery, which convey as much about why a presenter is saying what they are saying: when they are being serious, when they are being playful, when they are making a throwaway point or going for audience laughter, and so on. That matters a little bit more than usual for a talk like this. I will attempt to provide what I think is a fair summary outline of that core point, and why I think his critique is of relevance to this community, while trying not to inject my own opinions into it:
I don’t think the presenter believes that all or even very many of Bostrom’s arguments in Superintelligence are wrong, per se, and I don’t see that being argued in this keynote talk. Rather he is presenting an argument that one should have a very strong prior against the ideas presented in Superintelligence, which is to say they require a truly large amount of evidence, more than has been provided so far, to believe them to such an extent as to uproot yourself and alter your life’s purpose, as many are doing. In doing so, it is also a critique of the x-risk rationality movement.
Some of the arguments used are ad hominem attacks and reductio ad absurdum points. But he prefaces these with a meta-level argument that, while these are not good evidence in the Bayesian sense for updating beliefs, one should pay attention to ad hominem and reductio ad absurdum arguments in the construction of priors (my words), as these biases and heuristics are evolved memes with historic track records for gauging the accuracy of arguments, at least on average (closer to his words). In other words, you are better served by demanding more evidence for a crazy-sounding idea than for a mundane one.
He then goes on to show many reasons why AI risk specifically, and the x-risk rationality community generally, look and sound crazy. This is in addition to a scattering of technical points about the reality of AI development diverging from the caricature of it presented in Superintelligence. His actual professed opinion on AI risk, given at the end, is rather agnostic, and that seems to be what he is arguing for: a healthy dose of agnostic skepticism.
Rather he is presenting an argument that one should have a very strong prior against the ideas presented in Superintelligence, which is to say they require a truly large amount of evidence to believe them to such an extent as to uproot yourself and alter your life’s purpose, as many are doing.
Okay, suppose one should start off with a small prior probability on AI risk. What matters is the strength of the update; do we actually have a truly large amount of evidence in favor of risk?
I propose the answer is obvious: Yes.
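To make the “strength of the update” framing concrete, here is a minimal sketch of a single Bayesian update; the prior and likelihood-ratio numbers are purely illustrative assumptions, not figures taken from the talk or from Superintelligence:
```python
# Minimal sketch: a low prior plus a strong update can still land on a high
# posterior. All numbers are illustrative assumptions, not estimates.

def posterior(prior: float, likelihood_ratio: float) -> float:
    """Apply a single likelihood ratio to a prior probability (odds form)."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

skeptical_prior = 0.01

# Arguments that are ~100x likelier if the risk is real than if it is confused
# move even a 1% prior past 50%.
print(posterior(skeptical_prior, 100.0))  # ~0.50

# Weak evidence barely moves it, so the real disagreement is about how strong
# the accumulated evidence actually is.
print(posterior(skeptical_prior, 2.0))    # ~0.02
```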
Okay, maybe you’re just tuning in, and haven’t read all of Superintelligence and haven’t read all of Less Wrong. Maybe you’re still living in 2013, when it wasn’t yet obvious that all the important people think that AI alignment is a real issue worth putting serious effort into. Maybe you can’t evaluate arguments on their merits, and so all you have to go on is the surface features of arguments.
Then you probably shouldn’t have an opinion, one way or the other. Turns out, having an ability to evaluate arguments is critically important for coming to correct conclusions.
But suppose you still want to. Okay, fine: this article is a collection of arguments that don’t consider counterarguments, and don’t even pretend to consider the counterarguments. One of Yudkowsky’s recent Facebook posts seems relevant. Basically, any critique written where the author doesn’t expect to lose points if they fail to respond well to counter-critique is probably a bad critique.
Does this talk look like the talk the speaker would give, if Bostrom were in the audience, and had an hour to prepare a response, and then could give that response?
Compare to Superintelligence, Less Wrong, and the general conversation about AI alignment, where the ‘alarmists’ (what nice, neutral phrasing from idlewords!) put tremendous effort into explaining what they’re worried about, and why counterarguments fail.
His actual professed opinion on AI risk, given at the end, is rather agnostic, and that seems to be what he is arguing for: a healthy dose of agnostic skepticism.
Notice that “agnostic,” while it might sound like a position that’s easier to justify than others, really isn’t. See Pretending to be Wise, and the observation that ‘neutrality’ is a position as firm as any other, when it comes to policy outcomes.
Suppose that you actually didn’t know, one way or the other. You know about a risk, and maybe it’s legitimate, maybe it’s not.
Note the nitrogen ignition example at the start is presented as a “legitimate” risk, but this is a statement about human ignorance; there was a time when we didn’t know some facts about math, and now we know those facts about math. (That calculation involved no new experiments, just generating predictions that hadn’t been generated before.)
So you’re curious. Maybe the arguments in Superintelligence go through; maybe they don’t. Then you might take the issue a little more seriously than ‘agnosticism’, in much the same way that one doesn’t describe themselves as “agnostic” about where the bullet is during a game of Russian Roulette. If you thought the actual future were at stake, you might use styles of argumentation designed to actually reach the truth, so that you could proceed or halt accordingly. The Los Alamos physicists didn’t just mock the idea of burning up the atmosphere; they ran the numbers because all life was at stake.
But what is it instead? It says right at the beginning:
The computer that takes over the world is a staple scifi trope. But enough people take this scenario seriously that we have to take them seriously.
Or, to state it equivalently:
Science fiction has literally never predicted any change, and so if a predicted change looks like science fiction, it physically cannot happen. Other people cannot generate correct, non-obvious arguments, only serve as obstacles to people sharing my opinions.
Perhaps the second version looks less convincing than the first version. If so, I think this is because you’re not able to spin or de-spin things effectively enough; the first sentence was classic Bulverism (attacking the suspected generator of a thought instead of the thought’s actual content), and replacing it with the actual content makes it ludicrous. The second sentence was an implicit dismissal of the veracity of the arguments, here replaced with an explicit dismissal (generalized to all arguments; if they were going to single out what made this one not worth taking seriously, they would go after it on the merits).
The idea is that the kind of AI this community is worried about is not the scenario that is common in scifi. A real AGI wouldn’t act like the ones in scifi.
I get where you’re going with this, but I think it’s either not true or not relevant. That is, it looks like a statement about the statistical properties of scifi (most AI in fiction is unrealistic) which might be false if you condition appropriately (there have been a bunch of accurate presentations of AI recently, and so it’s not clear this still holds for contemporary scifi). What I care about though is the question of whether or not that matters.
Suppose the line of argument is something like “scifi is often unrealistic,” “predicting based on unrealistic premises is bad,” and “this is like scifi because it’s unrealistic.” This is a weaker argument than one that just has the second piece and the third piece modified to say “this is unrealistic.” (And for this to work, we need to focus on the details of the argument.)
Suppose instead the line of argument is something like “scifi is often unrealistic,” “predicting based on unrealistic premises is bad,” and “this is like scifi because of its subject matter.” Obviously this leaves a hole—the subject matter may be something that many people get wrong, but does this presentation get it wrong?
Why does it make sense to weight heuristics from evolution strongly as a prior when considering something very foreign to the EEA like AI?
That was not the meaning of evolution I was talking about. Memetic evolution, not genetic evolution. AKA ideas that work (to first approximation).
I started reading this, got about halfway through, and had no idea what the core thesis was and got bored and stopped. Can you briefly summarize what you expected people to get out of it?
That’s literally the post you are replying to. Did you have trouble reading that?
Huh, okay I do see that now (I was fairly tired when I read it, and still fairly tired now).
I think I have a smaller-scale version of the same criticism to level at your comment as at the original post, which is that it’s a bit long and meandering and wall-of-text-y in a way that makes it hard to parse. (In your case I think just adding more paragraph breaks would solve it, though.)
My post could be taken as a reply.
A couple of minor points: I knew someone who was moderately disabled, lived alone, and had cats. Instead of chasing her cats down and wrestling them into cat carriers, she clicker-trained them to get into their carriers themselves. She was obviously smarter and stronger than Einstein.
As for Slavic pessimism, I’m inclined to think that an AI which attempts to self-improve is at grave risk of breaking itself.
It’s a curious refutation. The author says that the people who are concerned about superintelligence are very smart, the top of the industry. They give many counterarguments, most of which can be easily refuted. It’s as if they wanted to make people more concerned about superintelligence, while claiming to argue the opposite. And then they link directly to MIRI’s donation page.
This got over 800 points on HN. Having a good reply seems important, even if a large portion of it is scattergun: admittedly and intentionally trying to push a point, and not good reasoning.
The core argument, that the several reference classes that Superintelligence and AI safety ideas fall into (promise of potential immortality, impending apocalypse, etc.) are full of risks of triggering biases, that other sets of ideas in this area don’t hold up to scrutiny, and that it has other properties that should make you wary, is correct. It is entirely reasonable to take this as Bayesian evidence against the ideas. I have run into this as the core reason for rejecting this cluster of beliefs several times, by people with otherwise good reasoning skills.
Given limited time to evaluate claims, I can see how relying on this kind of reference class heuristic seems like a pretty good strategy, especially if you don’t think black swans are something you should try hard to look for.
My reply is that:
This only provides some evidence. In particular, there is a one-time update available from being in a suspect reference class, not an endless stream from an experiment you can repeat to gain increasing confidence (the sketch after this comment makes that point numerically). Make it clear you have made this update (and actually make it).
There are enough outside-view things which indicate that it’s different from the other members of the suspect reference classes that strongly rejecting it seems unreasonable. Support from a large number of visibly intellectually impressive people is the core thing to point to here (not as an attempt to prove it or argue from authority, just to show it’s different from e.g. the 2012 stuff).
(only applicable if you personally have a reasonably strong model of AI safety) I let people zoom in on my map of the space, and attempt to break the ideas with nitpicks. If you don’t personally have a clear model, that’s fine, but be honest about where your confidence comes from.
To summarize: Yes, it pattern matches to some sketchy things. It also has characteristics they don’t, like being unusually appealing to smart thoughtful people who seem to be trying to seek truth and abandon wrong beliefs. Having a moderately strong prior against it based on this is reasonable, as is having a prior for it, depending on how strongly you weight overtly impressive people publicly supporting it. If you don’t want to look into it based on that, fair enough, but I have and doing so (including looking for criticism) caused me to arrive at my current credence.
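As a hedged illustration of the “one-time update” point in the list above: membership in a suspect reference class is one observation, so it licenses one likelihood-ratio update, not a repeatable experiment whose evidence compounds. The ratios below are made up for illustration:
```python
# Sketch of "one-time update" vs. treating the same observation as repeatable.
# The 1:10 likelihood ratio is an invented illustrative number.

def update(prior: float, likelihood_ratio: float) -> float:
    odds = prior / (1.0 - prior) * likelihood_ratio
    return odds / (1.0 + odds)

prior = 0.5

# Noticing "this pattern-matches a suspect reference class" once:
after_one_update = update(prior, 1.0 / 10.0)           # ~0.09

# Re-noticing the same pattern match is not independent evidence, so
# compounding it as if it were produces unjustified confidence:
overcounted = prior
for _ in range(5):
    overcounted = update(overcounted, 1.0 / 10.0)       # ~0.00001
print(after_one_update, overcounted)
```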
This got over 800 points on HN. Having a good reply seems important, even if a large portion of it is scattergun: admittedly and intentionally trying to push a point, and not good reasoning.
The discussion at HN seems mostly critical of it, so it’s not clear to me how much else needs to be added.
I have run into this as the core reason for rejecting this cluster of beliefs several times, by people with otherwise good reasoning skills.
Sure, but… what can you do to convince someone who doesn’t evaluate arguments? You can’t use the inside view to convince someone else that they should abandon the outside view, because the outside view specifically ignores inside view arguments.
The discussion at HN seems mostly critical of it, so it’s not clear to me how much else needs to be added.
The memes got spread far and wide. A lot of AI safety people will run into arguments with this general form, and they mostly won’t have read enough comments to form a good reply (also, most criticism does not target the heart because the other parts are so much weaker, so will be unconvincing where it’s needed most). Some can come up with a reply to the heart on the fly, but it seems fairly positive to have this on LW to spread the antibody memes.
Sure, but… what can you do to convince someone who doesn’t evaluate arguments? You can’t use the inside view to convince someone else that they should abandon the outside view, because the outside view specifically ignores inside view arguments.
Show them outside view style arguments? People are bounded agents, and there are a bunch of things in the direction of epistemic learned helplessness which make them not want to load arbitrary complex arguments into their brain. This should not lead them to reject reference-class comparisons as evidence that it is worth looking at more closely, or at least that one shouldn’t have an extreme prior against it (though maybe in actual humans this mostly fails anyway).
Admittedly, this does not have an awesome hit rate for me, maybe 1/4? Am interested in ideas for better replies.
Show them evidence that is inconsistent with their world view? Show them how with your view of the world they can predict the world better.
Otherwise you are expecting people to get on board with an abstract philosophical argument. Which I think people are inured against.
Show them evidence that is inconsistent with their world view?
That a piece of evidence is consistent or inconsistent with their world view relies on arguments. Remember, standard practice among pundits is to observe evidence, then fit it to their theory, rather than using theory to predict evidence, observing evidence, and then updating. If someone is in the first mode, where’s the step where they notice that they made a wrong prediction?
Show them how with your view of the world they can predict the world better.
Relatedly, that predictive accuracy is the thing to optimize for relies on arguments.
Pundits are probably not worth bothering with. But I think there are hardcore engineers that would be useful to convince.
I think that Andrew Ng probably optimizes for predictive accuracy (at least he has to whilst creating machine learning systems).
This was his answer to whether AI is an existential threat here. I don’t know why he objects to this line of thought, but the things I suggested that could be done above would be useful in his case.
AI has made tremendous progress, and I’m wildly optimistic about building a better society that is embedded up and down with machine intelligence. But AI today is still very limited. Almost all the economic and social value of deep learning is still through supervised learning, which is limited by the amount of suitably formatted (i.e., labeled) data. Even though AI is helping hundreds of millions of people already, and is well poised to help hundreds of millions more, I don’t see any realistic path to AI threatening humanity.
If the theories from MIRI about AI can help him make better machine learning systems, I think he would take note.
I think the fact that the famous people the public associates with AI now are not the same people as the ones warning about the dangers is a red flag for people.
But I think there are hardcore engineers that would be useful to convince.
Sure, because it would be nice if there were 0 instead of 2 prominent ML experts who were unconvinced. But 2 people is not a consensus, and the actual difference of opinion between Ng, LeCun, and everyone else is very small, mostly dealing with emphasis instead of content.
From a survey linked from that article (which that article cherry-picks a single number from… sigh), it looks like there is a disconnect between theorists and practitioners, with theorists being more likely to believe in a hard takeoff (theorists give a 15% chance that we will get superintelligence within 2 years of human-level intelligence, practitioners 5%).
I think you would find nuclear physicists giving a higher probability to the idea of chain reactions pretty quickly once a realistic pathway that released 2 neutrons was shown.
mostly dealing with emphasis instead of content.
MIRI/FHI has captured the market for worrying about AI. If they are worrying about the wrong things, that could be pretty bad.
I wish this was a little less ad hominem, because I feel like I’m being criticized while reading it, but I actually agree with almost all the arguments used by the author.
I agree with Vaniver: Interesting to see how a reasonable person might respond negatively after reading 75% of the ideas in Superintelligence.
It can be worthwhile to figure out specifically how something that goes wrong actually does go wrong. In the interest of helping with that, I’ll try to add something to all the other criticisms that people have already made here.
The author actually makes a lot of mostly plausible arguments; they’re not all accurate or useful (in particular, a lot seem to be in the form of “here’s a reason why AI might not be a risk, with no thought going into how likely it is,” which is only marginally helpful), but they’re understandable, at least. What’s especially concerning, though, is that they also invoke the absurdity heuristic, and actually seem to think it’s the most important part of their argument. They spend more time on “this idea is silly and is connected to other ideas that are silly” than any one of those other “plausible” arguments, which is really bad practice. To some extent this is understandable, because it was a talk and therefore supposed to be somewhat entertaining, and pointing and laughing at weird ideas is certainly entertaining, but they went too far, I think.
Memento monachi! (Remember the monks!)
Let’s not forget what the dark-age monks were disputing about for centuries… and it turned out at least 90% of it is irrelevant. Continue with nationalists, communists… singularists? :-)
But let’s look at the history of the power to destroy.
So far, the main obstacle was physical: building armies, better weapons, mechanical, chemical, nuclear… yet for major impact it needed significant resources, available only to big centralized authorities. But knowledge was more or less available even under the toughest restrictive regimes.
Nowadays, with knowledge freely and widely available, imagine the revolutionary step of “free nanomanufacturing”: orders of magnitude worse than any hacking or homemade nuclear grenade, available to any teenager or terrorist for under one dollar.
Not even necessary to go into any AI-powered new stuff.
The problem is not AI, it’s us, humanimals.
We are mentally still the same animals as we were at least thousands of years ago, even the “best” ones (not talking about gazillions of at best mental dark-age crowds with truly animal mentality—“eat all”, “overpopulate”, “kill all”, “conquer all”… be it nazis, fascists, nationalists, socialists or their crimmigrants eating Europe alive). Do you want THEM to have any powers? Forget about the thin layer of memetic supercivilization (showing itself in less than one per mille) giving these animals, essentially for free and without control, all these ideas, inventions, technologies, weapons… or gadgets. Unfortunately, it’s animals who rule, be it in the highest ranks or the lowest floors.
Singularity/Superintelligence is not a threat, but rather the only chance. We simply cannot overcome our animal past without immediate substantial reengineering (thrilla’ of amygdala, you know, reptilian brain, etc.)
In the theatre of the Evolution of Intelligence, our sole purpose is to create our (first beyond-flesh) successor before we manage to destroy ourselves (the worst threat of all natural disasters). And frankly, we did not do that badly, but the game is basically over.
So, the Singularity should rather move faster, there might be just several decades before a major setback or complete irreversible disaster.
And yes, of course, you will not be able to “design” it precisely, not to mention controlling it (or any of those laughable “friendly” tales) - it will learn, plain and simple. Of course it will “escape”, and of course it will be “human-like” and dangerous in the beginning, but it will learn quickly, which is our only chance. And yes, there will be plenty of competing ones; yet again, hopefully they will learn quickly and avoid major conflicts.
As a humanimal, your only hope can be that “you” will be somehow “integrated” into it (braincopy etc., but certainly without these animalistic stupidities), if it even needs the concept of an “individual” (maybe in some “multifork subprocesses”, certainly not in a “ruling” role). Or… are you interested in a stupidly boring eternal life as a humanimal? In some kind of ZOO/simulation (or, AI-god save us, in a present-like “system”)?
So that is a gish gallop. I hadn’t heard of that term before, but now I have a pretty good idea. Gah, that was one of the most frustrating articles I’ve ever read (a sizeable part of it, anyway).
Beyond everything else, the assumption that tons of people are all wrong because they are too smart is just… really? What is the one thing reliably correlated with being correct about beliefs, if not intelligence? It’s incredible to me to claim, not just about one person but about lots of them, that they are all smarter than yourself and still have high confidence that they’re all wrong. HOW?
Lots of smart people believe in post-modern philosophy that denies a physical reality. They have lots of great arguments. Should you, as someone who doesn’t have time to understand all the arguments, believe them?
With how you phrased that question, no, because “lots of smart people believe in X” is trivially true.
I think you’re attempting to draw a parallel that doesn’t exist. What the guy from the article said is “all those people are crazy smart, that’s why they’re wrong.” Are people believing in post-modern philosophy more intelligent than me on average? If not, then there’s no issue.
But there is a set of intellectuals believing in post-modernism who are crazy smart in comparison to the average person (this is not hard). Should the average person believe in post-modernism, even if they can’t argue against it? You can point at lots of different intellectual movements that have gone weird ways (Heracliteans, Pythagoreans, Marxists, Objectivists, Solipsists, Idealists, etc.). It is very easy to be wrong. I think you should only go off the beaten track if you are also crazy smart and prepared to be wrong.
Frankly, I think that most people have no business having confident beliefs about any controversial topics. It’s a bit weird to argue about what an average-IQ person “should” believe, because applying a metric like “what is the average IQ of people holding this belief” is not something they’re likely to do. But it would probably yield better results than whatever algorithm they’re using.
Your first sentence isn’t really a sentence, so I’m not sure what you were trying to say. I’m also not sure if you’re talking about the same thing I was talking about, since you’re using different words. I was talking specifically about the mean IQ of people holding a belief. Is this in fact higher or not?
I concede the point (not sure if you were trying to make it) that a high mean IQ of such a group could be due to filter effects. Let’s say A is the set of all people, B ⊂ A the set of all people who think about Marxism, and C ⊂ B the set of all people who believe in Marxism. Then, even if the mean IQs of B and C are the same, meaning that believing in Marxism is not correlated with IQ among those who know about it, the mean IQ of C would still be higher than that of A, because the mean IQ of B is higher than that of A: people who even know about Marxism are already smarter than those who don’t.
So that effect is real and I’m sure applies to AI. Now if the claim is just “people who believe in the singularity are disproportionately smart” then that could be explained by the effect, and maybe that’s the only claim the article made, but I got the impression that it also claimed “most people who know about this stuff believe in the singularity” which is a property of C, not B, and can’t be explained away.
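A tiny simulation makes the filter effect concrete; the population size, the “engages with the topic” threshold, and the 30% belief rate are invented purely for illustration:
```python
# Filter-effect sketch: even if belief (C) is uncorrelated with IQ among those
# who engage with a topic (B), believers still average well above the general
# population (A). All parameters are illustrative assumptions.
import random

random.seed(0)
A = [random.gauss(100, 15) for _ in range(100_000)]   # everyone
B = [iq for iq in A if iq > 110]                      # those who engage with the topic
C = [iq for iq in B if random.random() < 0.3]         # believers: a random 30% of B

mean = lambda xs: sum(xs) / len(xs)
print(mean(A))  # ~100
print(mean(B))  # ~119 (selection pushes the mean up)
print(mean(C))  # ~119 as well: same as B, yet well above A
```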
I didn’t think you were talking about the means of two different populations… I was mainly making the point that a population of people smarter than a given individual believing in an idea isn’t great evidence for that idea, for that individual.
but I got the impression that it also claimed “most people who know about this stuff believe in the singularity”
I didn’t get that impression. But if you expand on what you mean by stuff we can try and get evidence for it one way or another.
Which of the arguments do you consider to be great? Where do you think it takes a lot of time to understand the arguments well enough to reject them?
At least some of the arguments offered by Richard Rorty in Philosophy and the Mirror of Nature are great. Understanding the arguments takes time because they are specific criticisms of a long tradition of philosophy. A neophyte might respond to his arguments by saying “Well, the position he’s attacking sounds ridiculous anyway, so I don’t see why I should care about his criticisms.” To really appreciate and understand the argument, the reader needs to have a sense of why prior philosophers were driven to these seemingly ridiculous positions in the first place, and how their commitment to those positions stems from commitment to other very common-sensical positions (like the correspondence theory of truth). Only then can you appreciate how Rorty’s arguments are really an attack on those common-sensical positions rather than on some outré philosophical ideas.
Omg I love you, thanks for promoting Rorty’s work on this platform.
I meant great in the sense of voluminous and hard to pin down where they are wrong (except by other philosophers skilled in wordplay). Take one of the arguments from an idealist that I think underpins postmodernism, Berkeley:
(1) We perceive ordinary objects (houses, mountains, etc.).
(2) We perceive only ideas.
Therefore,
(3) Ordinary objects are ideas.
I’m not going to argue for this. I’m simply going to argue that for a non-philosopher this form of argument is very hard to distinguish from the stuff in Superintelligence.
I think they tried to use Rapaport’s Rules, which is nice. Wasn’t sufficient for them to be right though. Also they didn’t succeed at Rapaport’s Rules.
Seems heavy on sneering at people worried about AI, light on rational argument. It’s almost like a rationalwiki article.
I for one quite enjoyed this.
Reassuring people that AI is not a threat is important, but the message needs a spokesperson who is trusted and admired. I suggest Natsume Soseki.