I’ll tell you that one of my brothers (whom I greatly respect) has decided not to be concerned about AGI risks specifically because he views EY as a very respected “alarmist” in the field (which is basically correct), and also views EY as giving off extremely “culty” and “obviously wrong” vibes (with Roko’s Basilisk and EY’s secrecy around the AI-box results being the main examples given), leading him to conclude that it’s simply not worth engaging with the community (and their arguments) in the first place. I wouldn’t personally engage with what I believe to be a doomsday cult (even if they claim that the risk of ignoring them is astronomically high), so I really can’t blame him.
I’m also aware of an individual who has enormous cultural influence, and was interested in rationalism, but heard from an unnamed researcher at Google that the rationalist movement is associated with the alt-right, so they didn’t bother looking further. (Yes, that’s an incorrect statement, but it came from the widespread [possibly correct?] belief that Peter Thiel is both alt-right and has/had close ties with many prominent rationalists.) This indicates a general lack of control over the narrative surrounding the movement, and has likely led directly to needlessly antagonistic relationships.
That’s putting it mildly.
The problems are well known. The mystery is why the community doesn’t implement obvious solutions. Hiring PR people is an obvious solution. There’s a posting somewhere in which Anna Salamon argues that there is some sort of moral hazard involved in professional PR, but never explains why, and everyone agrees with her anyway.
If the community really and literally is about saving the world, then a constant stream of people who are put off, or who even become enemies, incrementally makes the world more likely to be destroyed. So surely it’s an important problem to solve? Yet the community doesn’t even like discussing it. It’s as if maintaining some sort of purity, or some sort of impression that you don’t make mistakes, is more important than saving the world.
Presumably you mean this post.
I think there are two issues.
First, some of the ‘necessary to save the world’ things might make enemies. If it’s the case that Bob really wants there to be a giant explosion, and you think giant explosions might kill everyone, you and Bob are going to disagree about what to do, and Bob existing in the same information environment as you will constrain your ability to share your preferences and collect allies without making Bob an enemy.
Second, this isn’t an issue where we can stop thinking, and thus we need to continue doing things that help us think, even if those things have costs. In contrast, in a situation where you know what plan you need to implement, you can drop a lot of your ability to think in order to coordinate on implementing that plan. [Like, a lot of the “there is too much PR in EA” complaints were specifically about situations where people were overstating the effectiveness of particular interventions, which seemed pretty poisonous to the project of comparing interventions, which was one of the core goals of EA, rather than just ‘money moved’ or ‘number of people pledging’ and so on.]
That said, I agree that this seems important to make progress on; this is one of the reasons I worked in communications roles, this is one of the reasons I try to be as polite as I am, this is why I’ve tried to make my presentation more adaptable instead of being more willing to write people off.
So... that’s a metaphor for “telling people who like building AIs to stop building AIs pisses them off and turns them into enemies”. Which it might, but how often does that happen? Your prominent enemies aren’t in that category, as far as I can see. David Gerard, for instance, was alienated by a race/IQ discussion. So good PR might consist of banning race/IQ discussions.
Also, consider the possibility that people who know how to build AIs know more than you, so it’s less a question of their being enemies, and more one of their being people you can learn from.
I don’t know how public various details are, but my impression is that this was a decent description of the relationship between EY and Dario Amodei (and presumably still is?), though I think personality clashes are also part of that.
I mean, obviously they know more about some things and less about others? Like, virologists doing gain of function research are also people who know more than me, and I could view them as people I could learn from. Would that advance or hinder my goals?
If you are under some kind of misapprehension about the nature of their work, it would help. And you don’t know that you are not under a misapprehension, because they are the experts, not you. So you need to talk to them anyway. You might believe that you understand the field flawlessly, but you don’t know until someone checks your work.
It is not enough to say nice things: other representatives must be prevented from saying nasty things.
For any statement one can make, there will be people “alienated” (=offended?) by it.
David Gerard was alienated by a race/IQ discussion and you think that should’ve been avoided.
But someone was surely equally alienated by discussions of religion, evolution, economics, education and our ability to usefully define words.
Do we value David Gerard so far above any given creationist, that we should hire a PR department to cater to him and people like him specifically?
There is an ongoing effort to avoid overtly political topics (Politics is the mind-killer!), but this effort is doomed beyond a certain threshold, since everything is political to some extent. Or to some people.
To me, a concerted PR effort on the part of all prominent representatives to never say anything “nasty” would be alienating. I don’t think a community even somewhat dedicated to “radical” honesty could abide a PR department, or vice versa.
TL;DR—LessWrong has no PR department, LessWrong needs no PR department!
If you also assume that nothing short of perfection is acceptable, that’s a fully general argument against PR, not just against the possibility of LW/MIRI having good PR.
If you don’t assume that, LW/MIRI can have good PR, by avoiding just the most significant bad PR. Disliking racism isn’t some weird idiosyncratic thing that only Gerard has.
The level of PR you aim for puts an upper limit on how much “radical” honesty you can have.
If you aim for perfect PR, you can have 0 honesty.
If you aim for perfect honesty, you can have no PR. LessWrong doesn’t go that far, not by a long shot, even without a PR team present.
Most organizations do not aim for honesty at all.
The question is where we draw the line.
Which brings us to “Disliking racism isn’t some weird idiosyncratic thing that only Gerard has.”
From what I understand, Gerard left because he doesn’t like discussions about race/IQ.
Which is not the same thing as racism.
I, personally, don’t want LessWrong to cater to people who cannot tolerate a discussion.
Honesty =/= frankness. Good PR does not require you to lie.
Semantics.
Good PR requires you to put a filter between what you think is true and what you say.
It requires you to filter what you publicly and officially say. “You”, plural, the collective, can speak as freely as you like …in private. But if you, individually, want to be able to say anything you like to anyone, you had better accept the consequences.
“The mystery is why the community doesn’t implement obvious solutions. Hiring PR people is an obvious solution. There’s a posting somewhere in which Anna Salamon argues that there is some sort of moral hazard involved in professional PR, but never explains why, and everyone agrees with her anyway.”
“”You”, plural, the collective, can speak as freely as you like …in private.”
Suppose a large part of the community wants to speak as freely as it likes in public, and the mystery is solved.
We even managed to touch upon the moral hazard involved in professional PR—insofar as it is a filter between what you believe and what you say publicly.
There’s a hazard in having no filters, as well. One thing being bad doesn’t make another good.
None of these seem to reflect on EY, unless you would expect him to be able to predict that a journalist would write an incoherent, almost maximally inaccurate description of an event in which he criticized an idea for being implausible and then banned its discussion for being off-topic and pointlessly disruptive to something like two people, or that his clearly written rationale for not releasing the transcripts of the AI-box experiments would be interpreted as a recruiting tool for the only cult that requires no contributions to be a part of, doesn’t promise its members salvation or supernatural powers, has no formal hierarchy, and is based on a central part of economics.
I would not expect EY to have predicted that himself, given his background. If, however, he had either studied PR deeply or consulted with a domain expert before posting, then I would have totally expected that result to be predicted with some significant likelihood. Remember, optimally good rationalists should win, and should be able to anticipate social dynamics. In this case EY fell into a social trap he didn’t even know existed, so again, I do not blame him personally, but that does not negate the fact that he has historically not been very good at anticipating that sort of thing, due to lack of training/experience/intuition in that field.

I’m fairly confident that, at least regarding the Roko’s Basilisk disaster, I would have been able to predict something close to what actually happened if I had seen his comment before he posted it. (This would have been primarily due to pattern matching between the post and known instances of the Streisand Effect, as well as some amount of hard-to-formally-explain intuition that EY’s wording would evoke strong negative emotions in some groups, even if he hadn’t taken any action. Studying “ratio’d” tweets can help give you a sense for this, if you want to practice that admittedly very niche skill.) I’m not saying this to imply that I’m a better rationalist than EY (I’m not), merely to say that EY, and the rationalist movement generally, hasn’t focused on honing the skillset necessary to excel at PR, which has sometimes been to our collective detriment.
The question is whether people who prioritize social-position/status-based arguments over actual reality were going to contribute anything meaningful to begin with.
The rationalist community has been built on, among other things, the recognition that the human species is systematically broken when it comes to epistemic rationality. Why think that someone who fails this deeply wouldn’t continue failing at epistemic rationality at every step even once they’ve already joined?
I think making the assumption that anyone who isn’t in our community is failing to think rationally is itself not great epistemics. It’s not irrational at all to refrain from engaging with the ideas of a community you believe to be vaguely insane. After all, I suspect you haven’t looked all that deeply into the accuracy of the views of the Church of Scientology, and that’s not a failure on your part, since there’s little chance you’d gain much of value for your time if you did. There are many, many, many groups out there that sound intelligent at first glance, but fall apart when seriously engaged with. Likewise, there are groups that sound insane at first, but actually have deep truths to teach (I’d place some forms of Zen Buddhism in this category). It makes a lot of sense to trust your intuition on this sort of thing, if you don’t want to get sucked into cults or time-sinks.
I didn’t talk about “anyone who isn’t in our community,” but about people who prioritize social-position/status-based arguments over actual reality.
It’s epistemically irrational if I’m implying the ideas are false and if this judgment isn’t born from interacting with the ideas themselves but with