Aristotle seems (though he’s vague on this) to be thinking in terms of fundamental attributes, while I’m thinking in terms of present capacity, which can be reduced by external interventions such as schooling.
Thinking about people I know who’ve met Vassar, the ones who weren’t brought up to go to college* seem to have no problem with him and show no inclination to worship him as a god or freak out about how he’s spooky or cultish; to them, he’s obviously just a guy with an interesting perspective.
*As far as I know, I didn’t know any such people before 2020; it’s very easy for members of the educated class to mistake our bubble for statistical normality.
This is very interesting to me! I’d like to hear more about how the two groups’ behavior looks different, and also your thoughts on what’s the difference that makes the difference: what are the pieces of “being brought up to go to college” that lead to one class of reactions?
I have talked to Vassar. While he has a lot of “explicit control over conversations”, which could be called charisma, I’d hypothesize that the fallout is actually from his ideas (the charisma/intelligence just makes him able to argue those ideas credibly).
My hypothesis is the following: I’ve met a lot of rationalists and rationalist-adjacent people. Many of them care very deeply about EA and AI alignment. In fact, it seems to me to be a core part of many of these people’s identity (“I’m an EA person, thus I’m a good person doing important work”). Two anecdotes to illustrate this:
- I recently argued against a committed EA person. Eventually, I started feeling almost bad about arguing (even though we’re both self-declared rationalists!) because I realised that my line of reasoning questioned his entire life: his identity was built deeply on EA, and his job was selected to maximize money to give to charity.
- I had a conversation with a few unemployed rationalist computer scientists and suggested we might start a company together. One reply I got: “Only if it works on the alignment problem, everything else is irrelevant to me”.
Vassar very persuasively argues against EA and the work done at MIRI/CFAR (he doesn’t disagree that alignment is a problem, AFAIK). Assuming people are largely defined by these ideas, one can see how that could be threatening to their identity. I’ve read “I’m an evil person” from multiple people relating their “Vassar-psychosis” experience. To me it’s very easy to see how one could get there if the defining part of one’s identity is “I’m a good person because I work on EA/alignment” and one encounters persuasive “EA/alignment is a scam” arguments.
It also makes Vassar look like a genius (or a god), because “why wouldn’t the rest of the rationalists see these arguments?”, while it’s really just a group-bias phenomenon, where the social truth of the rationalist group is that EA is obviously good and AI alignment terribly important.
This would probably predict that the people experiencing “Vassar-psychosis” would have a stronger-than-average constructed identity based on EA/CFAR/MIRI?
What are your or Vassar’s arguments against EA or AI alignment? This is only tangential to your point, but I’d like to know about it if EA and AI alignment are not important.
The general argument is that EAs are not really doing what they say they do. One example from Vassar: when it comes to COVID-19, there seems to have been relatively little effective work by EAs. In contrast, Vassar considered giving prisoners access to personal protective equipment the most important intervention and organized effectively to make that happen.
At EA Global, EAs created an environment where someone who wrote a good paper warning about the risks of gain-of-function research doesn’t address the topic directly but only talks about it indirectly, focusing on more meta-level issues. Instead of getting into conflict with the people doing gain-of-function research, the EA community mostly ignored the problem and funded work that’s in less conflict with the establishment. There’s nearly no interest in the EA community in learning from those errors; people would rather avoid conflict.
If you read the full comments of this thread you will find reports that CEA used legal threats to cover up Leverage related information.
AI alignment is important, but just because one “works on AI risk” doesn’t mean that the work actually decreases AI risk. Tying your personal identity to being someone who works to decrease AI risk makes it hard to reason clearly about whether one’s actions actually do. OpenAI is an organization where people who see themselves as “working on AI alignment” work, and the recent discussion shows that whether that work reduces or increases actual risk is in open debate.
In a world where human alignment doesn’t work well enough to prevent dangerous gain-of-function experiments from happening, thinking about AI alignment instead of the problem of human alignment, where it’s easier to get feedback, might be the wrong strategic focus.
Did Vassar argue that existing EA organizations weren’t doing the work they said they were doing, or that EA as such was a bad idea? Or maybe that it was too hard to get organizations to do it?
He argued:
(a) EA orgs aren’t doing what they say they’re doing (e.g. cost effectiveness estimates are wildly biased, reflecting bad procedures being used internally), and it’s hard to get organizations to do what they say they do
(b) Utilitarianism isn’t a form of ethics; it’s still necessary to have principles, as in deontology or two-level consequentialism
(c) Given how hard it is to predict the effects of your actions on far-away parts of the world (e.g. international charity requiring multiple intermediaries working in a domain that isn’t well-understood), focusing on helping people you have more information about makes sense unless this problem can be solved
(d) It usually makes more sense to focus on ways of helping others that also build capacities, including gathering more information, to increase long-term positive impact
If you want, for example, the criticism of GiveWell: Ben Hoffman was employed at GiveWell and had experiences suggesting that the process by which their reports are produced has epistemic problems. If you want the details, talk to him.
The general model would be that between the actual intervention and the top there are a bunch of maze levels. GiveWell then hired normal corporate people, and the dynamics that the Immoral Mazes sequence describes play themselves out.
Vassar’s own actions are about doing altruism more directly, by looking for the most powerless people who need help and working to help them. In the COVID case he identified prisoners and then worked on making PPE available to them.
You might say his thesis is that “effective” in EA is about adding a management layer for directing interventions, and that management layer has the problems the Immoral Mazes sequence describes. According to Vassar, someone who wants to be altruistic shouldn’t delegate to other people their judgment of what’s effective and thus warrants support.
Link? I’m not finding it
https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=zqcynfzfKma6QKMK9
I think what you’re pointing to is:
I have a large number of negative Leverage experiences between 2015-2017 that I never wrote up due to various complicated adversarial dynamics surrounding Leverage and CEA (as well as various NDAs and legal threats, made by both Leverage and CEA, not leveled at me, but leveled at enough people around me that I thought I might cause someone serious legal trouble if I repeat a thing I heard somewhere in a more public setting)
I’m getting a bit pedantic, but I wouldn’t gloss this as “CEA used legal threats to cover up Leverage related information”. Partly because the original bit is vague, but also because “cover up” implies that the goal is to hide information.
For example, imagine companies A and B sue each other, which ends up with them settling and signing an NDA. Company A might accept an NDA because they want to move on from the suit and agreeing to an NDA does that most effectively. I would not describe this as company A using legal threats to cover up B-related information.
In that timeframe, CEA and Leverage were running the Pareto Fellowship together. If you read the common-knowledge post, you find people saying that they were misled by CEA because the announcement didn’t mention that the Pareto Fellowship was largely run by Leverage.
On their mistakes page, CEA has a section about the Pareto Fellowship, but it hides the fact that Leverage was involved in the Pareto Fellowship and instead says: “The Pareto Fellowship was a program sponsored by CEA and run by two CEA staff, designed to deepen the EA involvement of promising students or people early in their careers.”
That does look to me like hiding information about the cooperation between Leverage and CEA.
I do think that publicly presuming that people who hide information have something to hide is useful. If there’s nothing to hide, I’d love to know what happened back then, or who thinks what happened should stay hidden. At a minimum, I do think that CEA withholding the information that the people who went to their programs spent their time in what now appears to be a cult is something that CEA should be open about on their mistakes page.
Yep, I think CEA has in the past straightforwardly misrepresented things (there is a talk on the history of EA by Will and Toby that says some really dubious things here, IIRC) and sometimes even lied in order to not mention Leverage’s history with Effective Altruism. I think this was bad, and continues to be bad.
My initial thought on reading this was ‘this seems obviously bad’, and I assumed this was done to shield CEA from reputational risk.
Thinking about it more, I could imagine an epistemic state I’d be much more sympathetic to: ‘We suspect Leverage is a dangerous cult, but we don’t have enough shareable evidence to make that case convincingly to others, or we aren’t sufficiently confident ourselves. Crediting Leverage for stuff like the EA Summit (without acknowledging our concerns and criticisms) will sound like an endorsement of Leverage, which might cause others to be drawn into its orbit and suffer harm. But we don’t feel confident enough to feel comfortable tarring Leverage in public, or our evidence was shared in confidence and we can’t say anything we expect others to find convincing. So we’ll have to just steer clear of the topic for now.’
Still seems better to just not address the subject if you don’t want to give a fully accurate account of it. You don’t have to give talks on the history of EA!
I think the epistemic state of CEA was some mixture of something pretty close to what you list here and something closer to “Leverage maybe is bad, or maybe isn’t, but in any case it looks bad, and I don’t think I want people to think EA or CEA is bad, so we are going to try to avoid any associations between these entities, which will sometimes require stretching the truth”.
That has the corollary: “We don’t expect EAs to care enough about the truth/being transparent that this is a huge reputational risk for us.”
It does look weird to me that CEA doesn’t include this on the mistakes page when they talk about Pareto. I just sent CEA an email to ask:
Hi CEA,
On https://www.centreforeffectivealtruism.org/our-mistakes I see “The Pareto Fellowship was a program sponsored by CEA and run by two CEA staff, designed to deepen the EA involvement of promising students or people early in their careers. We realized during and after the program that senior management did not provide enough oversight of the program. For example, reports by some applicants indicate that the interview process was unprofessional and made them deeply uncomfortable.”
Is there a reason that the mistakes page does not mention the involvement of Leverage in the Pareto Fellowship? [1]
They wrote back, linking me to https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-about-leverage-research-1-0?commentId=2QcdhTjqGcSc99sNN
(“we’re working on a couple of updates to the mistakes page, including about this”)
Yep, I think the situation is closer to what Jeff describes here, though I honestly don’t actually know, since people tend to get cagey when the topic comes up.
I talked with Geoff, and according to him there’s no legal contract between CEA and Leverage that prevents information sharing. All information sharing is prevented by organization-internal NDAs.
Huh, that’s surprising, if by that he means “no contracts between anyone currently at Leverage and anyone at CEA”. I currently still think it’s the case, though I also don’t see any reason for Geoff to lie here. Maybe there is some technical sense in which there is no contract between Leverage and CEA, but there are contracts between current Leverage employees, who used to work at CEA, and current CEA employees?
What he said is compatible with ex-CEA people still being bound by the NDAs they signed while they were at CEA. I don’t think anything happened that releases ex-CEA people from those NDAs.
The important thing is that CEA is responsible for those NDAs and is free to unilaterally lift them if it has an interest in the free flow of information. In the case of a settlement with contracts between the two organisations, CEA couldn’t unilaterally lift the settlement contract.
Public pressure on CEA seems to be necessary to get the information out in the open.
Talking with Vassar feels very intellectually alive, maybe like a high density of insight porn. I imagine that the people Ben talks about wouldn’t get much enjoyment out of insight porn either, so that emotional impact isn’t there.
There’s probably also an element that plenty of people who can normally follow an intellectual conversation can’t keep up in a conversation with Vassar, and afterwards are left with a bunch of different ideas that lack order in their mind. I imagine that sometimes there’s an idea overload that prevents people from thinking critically through some of the ideas.
If you have a person who hasn’t gone to college, they are used to encountering people who make intellectual arguments that go over their heads, and they have a way to deal with that.
From meeting Vassar, I don’t feel like he has the kind of charisma that someone like Valentine has (which I guess Valentine has downstream of doing a lot of bodywork stuff).
This seems mostly right; they’re more likely to think “I don’t understand a lot of these ideas, I’ll have to think about this for a while” or “I don’t understand a lot of these ideas, he must be pretty smart and that’s kinda cool” than to feel invalidated by this and try to submit to him in lieu of understanding.
The people I know who weren’t brought up to go to college have more experience navigating concrete threats and dangers, which can’t be avoided through conformity, since the system isn’t set up to take care of people like them. They have to know what’s going on to survive. This results in an orientation less sensitive to subtle threats of invalidation, and that sees more concrete value in being informed by someone.
In general this means that they’re much more comfortable with the kind of confrontation Vassar engages in than high-class people are.
This makes a lot of sense. I can notice ways in which I generally feel more threatened by social invalidation than by actual concrete threats of violence.
This is interesting to me because I was brought up to go to college, but I didn’t take it seriously (plausibly from depression or somesuch), and I definitely think of him as a guy with an interesting perspective. Okay, a smart guy with an interesting perspective, but not a god.
It had never occurred to me before that maybe people who were brought up to assume they were going to college might generally have a different take on the world than I do.