Roughly between 2015 and 2020 (though I might be off by a year or two), it seemed to me that numerous AI safety advocates were incredibly rude to LeCun, both online and in private communications.
I think this generalizes to more than LeCun. Screencaps of Yudkowsky’s “Genocide the Borderers” Facebook post circulated around right-wing social media for years in response to any mention of him, which makes forming any large coalition rather difficult. Would you trust someone who posted that with power over your future, if you were a Borderer or had values similar to theirs?
(Or at least it was the go-to post until Yudkowsky posted, in response to a Caplan poll, that infanticide up to 18 months wasn’t bad. Now that’s the post used to dismiss anything Yudkowsky says.)
Yeah, see, my equivalent of making ominous noises about the Second Amendment is to hint vaguely that there are all these geneticists around, and gene sequencing is pretty cheap now, and there’s this thing called CRISPR, and they can probably figure out how to make a flu virus that cures Borderer culture by excising whatever genes are correlated with that and adding genes correlated with greater intelligence. Not that I’m saying anyone should try something like that if a certain person became US President. Just saying, you know, somebody might think of it.
Reading it again almost 7 years later, it’s just so fractally bad. There are people out there with guns, while the proposed technology to CRISPR a flu that changes people’s genes is science fiction, so the top frame is nonsense. The actual viral payload, if such a thing could exist, would be the genocide of a people (no, you do not need to kill people for it to be genocide; this is still a central example). The idea wouldn’t work for so many reasons: a) a people is a cluster in a genetic distribution, not a set of Gene A, Gene B, Gene C; b) we don’t know all of these genes; c) in other contexts, Yudkowsky’s big idea is the orthogonality thesis, so focusing on making his outgroup smarter is sort of weird; d) the minimum message length of this virus would be unwieldy even if we knew all of the genes to target, to the point where I don’t know whether this would be feasible even if we had viruses that could do small gene edits; and of course, e) this is all a cheap shot where he’s calling for genocide over partisan politics, and we can now clearly say: the Trump presidency was not a thing to call for a genocide of his voters over.
(In retrospect (and with the knowledge that these sorts of statements are always narrativizing a more complex past), this post was roughly the inflection point where I gradually started moving from “Yudkowsky is a genius who is one of the few people thinking about the world’s biggest problems” to “lol, what’s Big Yud catastrophizing about today?” First seeing that he was wrong about some things made it easier to think critically about other things he said, and here we are today, but that’s dragging the conversation in a very different direction than your OP.)
no, you do not need to kill people for it to be genocide; this is still a central example
A central example, really?
When I think of genocide, killing people is definitely what comes to mind. I agree that’s not necessary, but Wikipedia says:
In 1948, the United Nations Genocide Convention defined genocide as any of five “acts committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group.” These five acts were: killing members of the group, causing them serious bodily or mental harm, imposing living conditions intended to destroy the group, preventing births, and forcibly transferring children out of the group.
I don’t think it’s centrally any of those actions, or centrally targeted at any of those groups.
Which isn’t to say you can’t call it genocide, but I really don’t think it’s a central example.
in other contexts, Yudkowsky’s big idea is the orthogonality thesis, so focusing on making his outgroup smarter is sort of weird
This doesn’t seem weird to me. I don’t think the orthogonality thesis is true in humans (i.e. I think smarter humans tend to be more value aligned with me); and sometimes making non-value-aligned agents smarter is good for you (I’d rather play iterated prisoner’s dilemma with someone smart enough to play tit-for-tat than someone who can only choose between being CooperateBot or DefectBot).
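To make that last parenthetical concrete, here’s a minimal sketch of the comparison, with assumed standard payoffs (3 each for mutual cooperation, 1 each for mutual defection, 5/0 when one side defects against a cooperator); the strategy names and numbers are illustrative, not anything stated above:

```python
# A minimal iterated prisoner's dilemma sketch (illustrative only; payoffs and
# strategy names are assumptions, not anything from the comment above).
# Payoffs: both cooperate -> 3 each, both defect -> 1 each,
# lone defector -> 5, the player defected against -> 0.

PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(my_moves, their_moves):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not their_moves else their_moves[-1]

def cooperate_bot(my_moves, their_moves):
    return "C"

def defect_bot(my_moves, their_moves):
    return "D"

def play(me, them, rounds=100):
    my_moves, their_moves = [], []
    my_score = their_score = 0
    for _ in range(rounds):
        a = me(my_moves, their_moves)
        b = them(their_moves, my_moves)
        pa, pb = PAYOFF[(a, b)]
        my_score, their_score = my_score + pa, their_score + pb
        my_moves.append(a)
        their_moves.append(b)
    return my_score, their_score

for name, opponent in [("tit-for-tat (smart, selfish)", tit_for_tat),
                       ("CooperateBot (dumb, nice)", cooperate_bot),
                       ("DefectBot (dumb, selfish)", defect_bot)]:
    mine, theirs = play(tit_for_tat, opponent)
    print(f"vs {name}: my score = {mine}, their score = {theirs}")
# If I play tit-for-tat myself, a smart selfish opponent lets cooperation emerge
# (~3 points per round for me), while a dumb selfish DefectBot leaves me near 1 per round.
```

The point being that the smarter selfish opponent is the one I can actually sustain cooperation with.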
I was going to write something saying “no, actually, we have the word genocide to describe the destruction of a people,” but walked away because I didn’t think that’d be a productive argument for either of us. But after sleeping on it, I want to respond to your other point:
I don’t think the orthogonality thesis is true in humans (i.e. I think smarter humans tend to be more value aligned with me); and sometimes making non-value-aligned agents smarter is good for you (I’d rather play iterated prisoner’s dilemma with someone smart enough to play tit-for-tat than someone who can only choose between being CooperateBot or DefectBot).
My actual experience over the last decade is that some form of the above statement isn’t true. As a large human model trained on decades of interaction, my immediate response to querying my own next-experience predictor about interacting with smarter humans is: they show no strong correlation with my values, and they will defect unless there’s a very strong enforcement mechanism (especially in finance, business, and management). (Presumably because in our society most games aren’t iterated—or, if they are iterated, they’re closer to the dictator game than to the prisoner’s dilemma—but I’m very uncertain about causes and am much more worried about previously observed outputs.)
I suspect this isn’t going to be convincing to you, because I’m giving you the output of a fuzzy statistical model instead of a logical, verbalized, step-by-step argument. But the deeper crux is that I believe “The Rationalists” heavily overweight the second and underweight the first, when the first is a much more reliable source of information: it was generated by entanglement with reality in a way that mere arguments aren’t.
And I suspect that’s a large part of the reason why we—and I include myself with the Rationalists at that point in time—were blindsided by deep learning and connectionism winning: we expected intelligence to require some sort of symbolic reasoning, with a focus on explicit utility functions, formal decision theory, and maximizing things...and none of that seems even relevant to the actual intelligences we’ve made, which are doing fuzzy statistical learning on their training sets, arguably, just the way we are.
So I mostly don’t disagree with what you say about fuzzy statistical models versus step by step arguments. But also, what you said is indeed not very convincing to me, I guess in part because it’s not like my “I think smarter humans tend to be more value aligned with me” was the output of a step by step argument either. So when the output of your fuzzy statistical model clashes with the output of my fuzzy statistical model, it’s hardly surprising that I don’t just discard my own output and replace it with yours.
I’m also not simply discarding yours, but there’s not loads I can do with it as-is—like, you’ve given me the output of your fuzzy statistical model, but I still don’t have access to the model itself. I think if we cared enough to explore this question in more depth (which I probably don’t, but this meta thread is interesting) we’d need to ask things like “what exactly have we observed”, “can we find specific situations where we anticipate different things”, “do we have reason to trust one person’s fuzzy statistical models over another”, “are we even talking about the same thing here”.
(In retrospect (and with the knowledge that these sorts of statements are always narrativizing a more complex past), this post was roughly the inflection point where I gradually started moving from “Yudkowsky is a genius who is one of the few people thinking about the world’s biggest problems” to “lol, what’s Big Yud catastrophizing about today?” First seeing that he was wrong about some things made it easier to think critically about other things he said, and here we are today, but that’s dragging the conversation in a very different direction than your OP.)
If I had to put down my own inflection point for where I started getting worried about Yudkowsky’s epistemics and his public statements around AI risk, it would be the Time article. It showed me two problems:
Yudkowsky has a big problem with overconfidence, and in general made many statements in the Time article that are misleading at best, in ways the general public likely wouldn’t recognize.
Yudkowsky is terrible at PR, and is generally unable to talk about AI risk without polarizing people. Given that AI risk is thankfully still mostly unpolarized and outside of politics, I am getting concerned that Yudkowsky is a terrible public speaker/communicator on AI risk, even worse than some of the AI protests.
Edit: I sort of retract my statement. While I still think Eliezer is veering dangerously close to hoping for warfare and possible mass deaths over GPU clusters, I do retract the specific claim that Eliezer advocated nukes. On a second reading, it was instead airstrikes and acts of war, with no call to nuke other countries. I misremembered the actual claims made in the Time article.
it would be the Time article and his calls for nuclear strikes on AI centers.
(edit: I see Noosphere has since edited his comment, which seems good, but, leaving this up for posterity)
He did not call for nuclear strikes on AI centers, and while I think this was an understandable thing to misread him on initially, by this point we’ve had a whole bunch of discussions about it, and you have no excuse to continue spreading falsehoods about what he said.
I think there are reasonable things to disagree with Eliezer on and reasonable things to argue about his media presence, but please stop lying.
This is kind of the point where I despair about LessWrong and the rationalist community.
While I agree that he did not call for nuclear first strikes on AI centers, he said:
If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
and
Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
Asking us to be OK with provoking a nuclear second strike by attacking a nation that is building a GPU cluster, when it isn’t even a signatory to an international agreement banning GPU clusters, is still bad, and whether the nukes fly as part of the first strike or the retaliatory second strike seems like a weird thing to get hung up on. Picking this nit feels like a deflection, because what Eliezer said in the TIME article is still entirely deranged and outside international norms.
And emotionally, I feel really, really uncomfortable. Like, sort of dread in stomach uncomfortable.
So I disagree with this, but, maybe want to step back a sec, because, like, yeah the situation is pretty scary. Whether you think AI extinction is imminent, or that Eliezer is catastrophizing and AI’s not really a big deal, or AI is a big deal but you think Eliezer’s writing is making things worse, like, any way you slice it something uncomfortable is going on.
I’m very much not asking you to be okay with provoking a nuclear second strike. Nuclear war is hella scary! If you don’t think AI is dangerous, or you don’t think a global moratorium is a good solution, then yeah, it totally makes sense to be scared by this. And even if you think (as I do) that a global moratorium that is actually enforced is a good idea, the possible consequences are still really scary and not to be taken lightly.
I also didn’t particularly object to most of your earlier comments here (I think I disagree, but I think it’s a kinda reasonable take. Getting into that doesn’t seem like the point).
But I do think there are really important differences between regulating AIs the way we regulate nukes (which is what I think Eliezer is advocating) and proactively launching a nuclear strike on a country. They’re both extreme proposals, but I think it’s false to say Eliezer’s proposal is totally outside international norms. It doesn’t feel like a nitpick or hairsplitting to ask someone to notice the difference between an international nuclear proliferation treaty (that other governments are pressured to sign) and a preemptive nuclear strike. The latter is orders of magnitude more alarming. (I claim this is a very reasonable analogy for what Eliezer is arguing.)
That seems mostly like you don’t feel (at least on a gut level) that a rogue GPU cluster in a world where there’s an international coalition banning them is literally worse than a (say) 20% risk of a full nuclear exchange.
If instead it was a rogue nation credibly building a nuclear weapon that would ignite the atmosphere according to our best physics, would you still feel it was deranged to suggest that we should stop it from being built, even at the risk of a conventional nuclear war? (And still only as a final resort, after all other options have been exhausted.)
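For what it’s worth, the gut-level comparison here can be written as a one-line expected-harm calculation. The numbers below are purely illustrative placeholders (no one in this exchange gave estimates):

```python
# Purely illustrative placeholders; substitute your own estimates.
p_doom_given_rogue_cluster = 0.5   # assumed: chance a rogue training run ends in extinction
harm_extinction = 1.0              # normalize extinction to 1 unit of harm
p_nuclear_exchange = 0.2           # the "(say) 20%" above
harm_nuclear_exchange = 0.1        # assumed: catastrophic, but well short of extinction

expected_harm_of_tolerating = p_doom_given_rogue_cluster * harm_extinction   # 0.5
expected_harm_of_enforcing = p_nuclear_exchange * harm_nuclear_exchange      # 0.02
print(expected_harm_of_tolerating > expected_harm_of_enforcing)  # True with these inputs
```

With a low enough p_doom, or a nuclear-exchange harm close to extinction, the inequality flips; the disagreement is over those inputs, not the arithmetic.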
I can certainly sympathize with the whole dread in the stomach thing about all of this, at least.