Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.
So this seems deliberate: high-psychoticism people are the ones most likely to understand what he has to say.
This isn’t nefarious. Anyone trying to meet new people to talk to, for any reason, is going to preferentially seek out people who are a better rather than worse match. Someone who didn’t like our robot cult could make structurally the same argument about, say, efforts to market Yudkowsky’s writing (like spending $28,000 distributing copies of Harry Potter and the Methods of Rationality to math contest winners): why, they’re preying on innocent high-IQ systematizers and filling their heads with scary stories about the coming robot apocalypse!
I mean, technically, yes. But in Yudkowsky and friends’ worldview, the coming robot apocalypse is actually real, and high-IQ systematizers are the people best positioned to understand this important threat. Of course they’re going to try to market their memes to that neurotype-demographic. What do you expect them to do? What do you expect Michael to do?
There’s a sliding scale ranging from seeking out people who are better at understanding arguments in general to seeking out people who are biased toward agreeing with a specific set of arguments (and perhaps made better at understanding those arguments by that bias). Targeting math contest winners seems more toward the former end of the scale than targeting high-psychoticism people. This seems to me to be true independently of the correctness of the underlying arguments. You don’t have to already agree about the robot apocalypse to be able to see why math contest winners would be better able to understand arguments for or against the robot apocalypse.
If Yudkowsky and friends were deliberately targeting arguments for short AI timelines at people who already had a sense of a foreshortened future, then that would be more toward the latter end of the scale, and I think you’d object to that targeting strategy even though they’d be able to make an argument structurally the same as your comment.
Yudkowsky and friends are targeting arguments that AGI is important at people already likely to believe AGI is important (and who are open to thinking it’s even more important than they think), e.g. programmers, transhumanists, and reductionists. The case is less clear for short timelines specifically, given the lack of public argumentation by Yudkowsky et al., but the other people I know who have tried to convince people about short timelines (e.g. at the Asilomar Beneficial AI conference) were targeting people likely to be somewhat convinced of this, e.g. people who think machine learning / deep learning are important.
In general this seems really expected and unobjectionable? “If I’m trying to convince people of X, I’m going to find people who already believe a lot of the prerequisites for understanding X and who might already assign X a non-negligible prior.” This is how pretty much all systems of ideas spread; I have trouble thinking of a counterexample.
I mean, do a significant number of people not select who they talk with based on who already agrees with them to some extent and is paying attention to similar things?
If short timelines advocates were seeking out people with personalities that predisposed them toward apocalyptic terror, would you find it similarly unobjectionable? My guess is no. It seems to me that a neutral observer who didn’t care about any of the object-level arguments would say that seeking out high-psychoticism people is more analogous to seeking out high-apocalypticism people than it is to seeking out programmers, transhumanists, reductionists, or people who think machine learning / deep learning are important.
The way I can make sense of seeking high-psychoticism people being morally equivalent to seeking high-IQ systematizers is if I drain any normative valence from “psychotic” and imagine there is a spectrum from autistic to psychotic. On this spectrum, the extreme autistic is exclusively focused on exactly one thing at a time and is incapable of cognition that has to take into account context, especially context they aren’t already primed to have in mind, while the extreme psychotic can only see the globally interconnected context where everything means/is connected to everything else. Obviously neither extreme state is desirable, but leaning one way or the other could be very helpful in different contexts.
See also: indexicality.
On the other hand, returning to my reflective beliefs: I think psychosis is a much scarier failure mode than “autism” on this scale, and I would not personally pursue any actions that pushed people toward it without, among other things, some kind of supporting infrastructure for processing the psychotic state without losing the plot (social or cultural would work, but whatever).
I wouldn’t find it objectionable. I’m not really sure what morally relevant distinction is being pointed at here; apocalyptic beliefs might make the inferential distance to specific apocalyptic hypotheses lower.
Well, I don’t think it’s obviously objectionable, and I’d have trouble putting my finger on the exact criterion for objectionability we should be using here. Something like “we’d all be better off in the presence of a norm against encouraging people to think in ways that might be valid in the particular case where we’re talking to them but whose appeal comes from emotional predispositions that we sought out in them that aren’t generally either truth-tracking or good for them” seems plausible to me. But I think it’s not as obviously unobjectionable as Zack seemed to be suggesting in his last few sentences, which was what moved me to comment.
I don’t have well-formed thoughts on this topic, but one factor that seems relevant to me has a core that might be verbalized as “susceptibility to invalid methods of persuasion”, which seems notably higher in the case of people with high “apocalypticism” than people with the other attributes described in the grandparent. (A similar argument applies in the case of people with high “psychoticism”.)
That might be relevant in some cases, but it seems unobjectionable in both the psychoticism case and the apocalypse case. I would predict that LW people cluster together in personality measurements like OCEAN and Eysenck; it’s by default easier to write for people of a similar personality to yourself. Also, people notice high rates of Asperger’s-like characteristics around here, which are correlated with Jewish ethnicity and transgenderism (both also frequent around here).
It might not be nefarious.
But it might also not be very wise.
I question Vassar’s wisdom, if what you say is indeed true about his motives.
I question whether he’s got the appropriate feedback loops in place to ensure he is not exacerbating harms. I question whether he’s appropriately seeking that feedback rather than turning away from the kinds he finds overwhelming, distasteful, unpleasant, or doesn’t know how to integrate.
I question how much work he’s done on his own shadow and whether it’s inadvertently acting out in ways that are harmful. I question whether he has good friends he trusts who would let him know, bluntly, when he is out of line with integrity and ethics, or when he has ‘shadow stuff’ that he’s not seeing.
I don’t think this needs to be hashed out in public, but I hope there are people working closer to him on these things who have the wisdom and integrity to do the right thing.