So, some writer named Cathy O’Neil wrote a piece about futurists’ opinions on AI risk. It treated futurists as social groups with different incentives, and didn’t really engage with the content of their arguments. Instead, she points to considerations like this:
First up: the people who believe in the singularity and are not worried about it. […] These futurists are ready and willing to install hardware in their brains because, as they are mostly young or middle-age white men, they have never been oppressed.
She doesn’t engage with the content of their arguments about the future. I used to find this sort of thing inexplicable and annoying. Now I just find it sad but reasonable.
O’Neil is operating under the assumption that the denotative content of the futurists’ arguments is not relevant, except insofar as it affects the enactive content of their speech. In other words, their ideology is part of a process of coalition formation, and taking it seriously is for suckers.
AI and ad hominem
Scott Alexander of Slate Star Codex recently complained about O’Neil’s writing:
It purports to explain what we should think about the future, but never makes a real argument for it. It starts by suggesting there are two important axes on which futurists can differ: optimism vs. pessimism, and belief in a singularity. So you can end up with utopian singularitarians, dystopian singularitarians, utopian incrementalists, and dystopian incrementalists. We know the first three groups are wrong, because many of their members are “young or middle-age white men” who “have never been oppressed”. On the other hand, the last group contains “majority women, gay men, and people of color”. Therefore, the last group is right, there will be no singularity, and the future will be bad.
[…]
The author never even begins to give any argument about why the future will be good or bad, or why a singularity might or might not happen. I’m not sure she even realizes this is an option, or the sort of thing some people might think relevant.
Scott doesn’t have a solution to the problem, but he’s taking the right first step—trying to create common knowledge about the problem, and calling for others to do the same:
I wish ignoring this kind of thing was an option, but this is how our culture relates to things now. It seems important to mention that, to have it out in the open, so that people who turn out their noses at responding to this kind of thing don’t wake up one morning and find themselves boxed in. And if you’ve got to call out crappy non-reasoning sometime, then meh, this article seems as good an example as any.
Scott’s interpretation seems basically accurate, as far as it goes. It’s true that O’Neil doesn’t engage with the content of futurists’ arguments. It’s true that this is a problem.
The thing is, perhaps she’s right not to engage with the content of futurists’ arguments. After all, as Scott pointed out years ago (and I reiterated more recently), when the single most prominent AI risk organization initially announced its mission, that mission was one that basically 100% of credible arguments about AI risk imply is exactly the wrong thing to pursue. If you had assumed that the content of futurists’ arguments about AI risk would be a good guide to the actions taken as a result, you would quite often have been badly mistaken.
Of course, maybe you disbelieve the mission statement instead of the futurists’ arguments. Or maybe you believe both, but disbelieve the claim that OpenAI is working on things relevant to AI risk. Any way you slice it, you have to dismiss some of the official communication as falsehood, by someone who is in a position to know better.
So, why is it so hard to talk about this?
World of actors, world of scribes
The immediately prior Slate Star Codex post, Different Worlds, argued that if someone’s basic world view seems obviously wrong to you based on all of your personal experience, maybe their experience really is different. In another Slate Star Codex post, titled Might People on the Internet Sometimes Lie?, Scott described how difficult he finds it to consider the hypothesis that someone is lying, despite strong reason to believe that lying is common.
Let’s combine these insights.
Scott lives in a world in which many people—the most interesting ones—are basically telling the truth. They care about the content of arguments, and are willing to make major life changes based on explicit reasoning. In short, he’s a member of the scribe caste. O’Neil lives in actor-world, in which words are primarily used as commands, or coalition-building narratives.
If Scott thinks that paying attention to the contents of arguments is a good epistemic strategy, and the writer he’s complaining about thinks that it’s a bad strategy, this suggests an opportunity for people like Scott to make inferences about what other people’s very different life experiences are like. (I worked through an example of this myself in my post about locker room talk.)
It now seems to me like the experience of the vast majority of people in our society is that when someone is making abstract arguments, they are more likely to be playing coalitional politics than trying to transmit information about the structure of the world.
Clever arguers
For this reason, I noted with interest an exchange in the comments on Jessica Taylor’s recent Agent Foundations post on autopoietic systems and AI alignment. Paul Christiano and Wei Dai considered the implications of clever arguers, who might be able to make superhumanly persuasive arguments for arbitrary points of view, such that a secure internet browser might refuse to display arguments from untrusted sources without proper screening.
Wei Dai writes:
I’m envisioning that in the future there will also be systems where you can input any conclusion that you want to argue (including moral conclusions) and the target audience, and the system will give you the most convincing arguments for it. At that point people won’t be able to participate in any online (or offline for that matter) discussions without risking their object-level values being hijacked.
Christiano responds:
It seems quite plausible that we’ll live to see a world where it’s considered dicey for your browser to uncritically display sentences written by an untrusted party.
What if most people already live in that world? A world in which taking arguments at face value is not a capacity-enhancing tool, but a security vulnerability? Without trusted filters, would they not dismiss highfalutin arguments out of hand, and focus instead on whether the person making the argument seems friendly or unfriendly, using hard-to-fake group-affiliation signals? This bears a substantial resemblance to the behavior Scott was complaining about. As he paraphrases:
We know the first three groups are wrong, because many of their members are “young or middle-age white men” who “have never been oppressed”. On the other hand, the last group contains “majority women, gay men, and people of color”. Therefore, the last group is right, there will be no singularity, and the future will be bad.
Translated properly, this simply means, “There are four possible beliefs to hold on this subject. The first three are held by parties we have reason to distrust, but the fourth is held by members of our coalition. Therefore, we should incorporate the ideology of the fourth group into our narrative.”
This is admirably disjunctive reasoning. It is also really, really sad. It is almost a fully general defense against discourse. It’s also not something I expect we can improve by browbeating people, or sneering at them for not understanding how arguments work. The sad fact is that people wouldn’t have these defenses up if it didn’t make sense to them to do so.
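To make that defensive posture concrete, here is a toy sketch in Python. It is purely illustrative: the trust list, the affiliation flag, and the function are all invented for this example, not anything O’Neil, Scott, Christiano, or Wei Dai actually proposed.

```python
# Minimal sketch of "arguments as attack surface": engage with the content of a
# message only if its source is already trusted; otherwise fall back to coarse
# affiliation signals. Every name and rule here is invented for illustration.

TRUSTED_SOURCES = {"longtime_friend", "vetted_forum"}  # assumed allowlist

def handle_message(text: str, source: str, shares_group_signals: bool) -> str:
    """Decide how much of a message's content to actually engage with."""
    if source in TRUSTED_SOURCES:
        # Trusted channel: the argument itself gets read and evaluated.
        return text
    if shares_group_signals:
        # Untrusted but apparently friendly: note the affiliation, skip the argument.
        return "[noted: sender seems to be one of ours; argument not evaluated]"
    # Untrusted and unaffiliated: treat the clever argument as a potential exploit.
    return "[dismissed: untrusted persuasive content]"

print(handle_message(
    "Here is a forty-step proof that you should transfer your savings to me.",
    source="stranger",
    shares_group_signals=False,
))
```

On this reading, the demographic sneer Scott objects to is the affiliation check doing its work, not a botched attempt at the first branch.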
When I read Scott’s complaints, I was persuaded that O’Neil was fundamentally confused. But when I clicked through to her piece, I was shocked at how good it was. (To be fair, Scott did a very good job lowering my expectations.) She explains her focus quite explicitly:
And although it can be fun to mock them for their silly sounding and overtly religious predictions, we should take futurists seriously. Because at the heart of the futurism movement lies money, influence, political power, and access to the algorithms that increasingly rule our private, political, and professional lives.
Google, IBM, Ford, and the Department of Defense all employ futurists. And I am myself a futurist. But I have noticed deep divisions and disagreements within the field, which has led me, below, to chart the four basic “types” of futurists. My hope is that by better understanding the motivations and backgrounds of the people involved—however unscientifically—we can better prepare ourselves for the upcoming political struggle over whose narrative of the future we should fight for: tech oligarchs that want to own flying cars and live forever, or gig economy workers that want to someday have affordable health care.
I agree with Scott that the content of futurists’ arguments matters, and that it has to be okay to engage with that somewhere. But it also has to be okay to engage with the social context of futurists’ arguments, and an article that specifically tells you it’s about that seems like the most prosocial and scribe-friendly possible way to engage in that sort of discussion. If we’re going to whine about that, then in effect we’re just asking people to shut up and pretend that futurist narratives aren’t being used as shibboleths to build coalitions. That’s dishonest.
Most people in traditional scribe roles are not proper scribes, but a fancy sort of standard-bearer. If we respond to people displaying the appropriate amount of distrust by taking offense—if we insist that they spend time listening to our arguments simply because we’re scribes—then we’re collaborating with the deception. If we really are more trustworthy, we should be able to send costly signals to that effect. The right thing to do is to try to figure out whether we can credibly signal that we are actually trustworthy, by means of channels that have not yet been compromised.
And, of course, to actually become trustworthy. I’m still working on that one.
The walls have already been breached. The barbarians are sacking the city. Nobody likes your barbarian Halloween costume.
So, if Cathy O’Neil is writing posts about futurism-the-social-phenomenon from a leftist and mostly negative point of view, and Scott is writing pleas for there to be discussion of futurism-as-in-the-actual-future, then where are the people who are writing about futurism-the-social-phenomenon from a positive point of view? Where are the people who are forming coalitions on the other side from O’Neil? Where are the ideologues she’s afraid of? Can I join them?
Well, upon actually reading the article, it seems that many of the people she’s scared of aren’t ideological in the way I’d normally think of the word. The “prophets of capitalism” she mentions are Sheryl Sandberg, Oprah Winfrey, Bill Gates, and John Mackey—extremely rich, politically moderate, and hardly people I could go to a party with. Ray Kurzweil doesn’t produce a lot of content; the Seasteaders have been quiet for years. I’m used to “ideologies” or “movements” having, y’know, a vanguard—a population of young readers and writers producing endless discussion and propaganda. Techno-optimists don’t, it seems. That doesn’t mean they don’t have power, of course, but it’s mostly practical power (money, technology) and implicit power (framing, marketing) rather than mindshare.
With the exception of the EA/X-risk people, who are more conventionally “ideological” in the sense of being big talkers. But, again, those people are mostly like Scott, at least claiming to be interested in the actual future rather than the “future” as a lens upon the present.
It’s just weird to me. The O’Neils of the world don’t have an equal-and-opposite opposition. They have “opponents” who are doing very different things than them. It’s almost an asymmetric warfare situation.
It is! It is definitely analogous to an asymmetric warfare situation, and, as in many asymmetric warfare situations, the entrenched side’s narrative is that it’s just trying to run things sensibly, not aggress on anyone or self-aggrandize.
I think this is related to our culture’s aestheticization of revolutionaries somehow, but am not sure exactly how.
Ok, so I suspect “ideology” in the sense of “production of lots of identity-flavored text and protests and sometimes art” is just modeled on socialist & communist parties. Predictably, actual leftists are going to be better at that than other ideologies, because they’ve been at it longer. Predictably, most coalitions who oppose leftists are going to be using tactics other than ideology. (Fascists are an exception; they really are using similar means to their opponents on the left.)
Which means it’s not ever going to be a *fair* fight. Scott and Cathy are never going to meet on the same playing field and use the same methods against each other and see who is the stronger. I’m trying to imagine how someone *would* set up such an equal contest, because that would seem a lot more aesthetic, but I’m failing to visualize it.
If you are capable of seeing two levels, the social and the intellectual, then this is annoying, because one side has totally ceded the social realm and the other has totally ceded the intellectual realm, so you never actually get critique happening.
A “fair” fight might be possible if insiders bothered to argue “here is why I think this system is a good way to organize things and process the most relevant bits of information, and why I feel OK accepting this huge endowment of structural power” instead of taking their methods as a background fact in no need of justification. Maybe it wouldn’t persuade the Cathys O’Neil of the world, but I’d bet it would attract better critics.
GiveWell made a decent attempt at times, and did in fact get some good critics eventually.
Firstly, it is completely reasonable to look at the motivations of people in a movement when trying to evaluate what the world would look like if they were given more power and influence. My objection is to the way she went about this.
“But it also has to be okay to engage with the social context of futurists’ arguments, and an article that specifically tells you it’s about that seems like the most prosocial and scribe-friendly possible way to engage in that sort of discussion.”
I would say that this article was neither prosocial nor scribe-friendly, so I have no issue with discouraging this kind of writing. It is not prosocial because she essentialises various groups instead of talking about their biases or incentives, which furthers a certain superweapon that is currently being built against groups considered privileged.
It is not scribe-friendly because her treatment of various actors is extremely unnuanced and unpersuasive. A more realistic characterisation would do things such as explain that optimists are more likely to become entrepreneurs and therefore to overestimate the positive impacts of technology. Or it could lament that the public good is dependent on the innovation of private actors, but that these actors will tend to focus their efforts on the issues that most affect them. These ideas are present, but in a form that is optimised to earn her social status within her ingroup by attacking an outgroup, rather than in a form that is designed to be persuasive.
Your Clever Arguers section is related to the Epistemic Learned Helplessness post on Scott’s old blog. There are undoubtedly circumstances where it can be beneficial to refuse to be persuaded by discourse because you lack the ability to evaluate the arguments. Nonetheless, I don’t get the impression from the article that the author is declaring her inability to evaluate the situation, as opposed to confidently stating her belief that Q4 is the correct one.
Insofar as there are two sides here, one includes both someone who is literally building rockets and talking about his plans to establish a Mars base, and someone who managed to scare the first person with “BWAHAHA my AI will follow you to Mars, nowhere is safe.” And also multiple other parties openly planning to build a superintelligence that will forcibly overthrow all the world’s governments simultaneously and take over the entire accessible universe.
The other side has written some magazine articles making fun of the first side.
Which side is building the superweapons again?
To clarify, I specifically meant that indicating her working hypotheses up front was scribe-friendly and prosocial, not that everything about the article was.
I think I was a bit unclear on the learned helplessness thing. It’s only part of what’s going on—another important part is modeling these narratives as coordination mechanisms for coalitions with various short-term goals and methods.
I’d like to recommend this for Front Page and possibly Featured, for building on existing discourse in a useful way.
I don’t think it fits on the frontpage, mostly because it is about mindkilling topics that I would usually categorize as politics (which is also why the relevant Scott post isn’t on LessWrong at all). It also focuses primarily on people as opposed to ideas, with a large part of the point being to make inferences about the epistemics of other tribes and of our own. While I think that discussion should happen somewhere, I don’t think the LessWrong frontpage is the place for it.
I agree that it’s a good article making good points with good rhetoric, but I don’t think it’s worth the risk of bringing these topics into the frontpage.
Ah, that is fair. (Actually, I guess I just assumed the previous post was also on the Front Page, but upon reflection my thought at the time was “man this is good but man it shouldn’t be on the LW front-page”)
I think this post is a bit more at the meta level than the previous one, but yeah, agree with your point here.
I’d like to see either of you (or anyone else) write another post, making the same basic points, but in a more careful and frontpage-worthy way. In case anyone is inhibited from doing that by anti-plagiarism norms or something, I hereby repudiate that dubious protection. A simple link to prior work is entirely adequate credit, even if you “steal” my work up to the point of copy-pasting large sections. I’d be happy to consult on whether the basic point got across, if that’s wanted, but I don’t want to be the barrier between a clear expression of these ideas and the reading public.
I want to promote this comment, but I can’t yet promote comments.
I concur. I do agree with Raemon’s point, though, that it’s really enjoyable to see ideas from different conversations within the community come together.
Very depressing!
I agree that it seems reasonable to expect some people to be blinded by distrust. That’s a good point.
Reading O’Neil’s article, I like the quadrant model more than I expected. That seems like a useful increase in resolution. However, I disagree about which demographics fall into which quadrant. Even if we limit our scope to the USA, I’m sure many women and people of color are worried about machines displacing humanity (Q2).
I think there is plenty of software in the world that encodes racist or otherwise unfair policies (as in the Q4 paragraphs), and the fact that this discrimination is sometimes concealed by the term ‘AI’ is a serious issue. But I think this problem deserves a more rigorous defense than this O’Neil article gives it.