He was accepted into the most recent EA Global conference and tried to raise funds for Intentional Insights there, with the benefit of the implied social proof. That’s what inspired Jeff Kaufman’s first post about him. In the resulting controversy, multiple people told me that he’d been showing up in various EA/Rationality venues, pushily trying to get people to do his thing, and no one had bothered to create common knowledge that there was a pattern of problematic behavior. But in the same controversy, some EA leaders argued that we were rushing to judgment.
I think he got some volunteer hours out of the EA community, and he certainly got a lot of time from prominent EAs.
Likewise with ACE: some individuals knew that they had misleading material up, and kept pointing it out to ACE staff in semi-public internet venues. Yet the organization’s estimates and recommendations continued to be taken seriously in public for quite a while, the org kept getting invited to EA leadership events, etc.
(Double counting caveat: one of the EA leaders defending Gleb was part of ACE at the time.)
I’m not sure what makes this such a pressing issue. While InIn may not be, strictly speaking, an “EA” organization, they do self-identify as promoters of “effective giving”. Many EA organizations are in fact expending resources on ‘outreach’ goals that seemingly do not differ in any meaningful way from InIn’s broad mission. Where InIn differs most markedly is in its methods, such as focusing much of its outreach on third-world countries, where messages can be delivered most cheaply and where the need for people to make effective career choices is that much starker, given the opportunity to “do good” locally by addressing a vast number of currently neglected issues.
Did you follow the link to the roundup of Gleb’s and InIn’s pattern of deception?
The article you link to doesn’t mention either a “pattern” or “deception” in its text, at least not in any relevant sense. It reads more like a laundry list of purported concerns. I’m ready to believe that some of these concerns are more justified than others, but that still does not convince me that there’s a relevant ‘pattern’ here.