Hi! A quick note: I created the CEA Dashboard which is the 2nd link you reference. The data here hadn’t been updated since August 2024, and so was quite out of date at the time of your comment. I’ve now taken this dashboard down, since I think it’s overall more confusing than helpful for grokking the state of CEA’s work. We still intend to come back and update it within a few months.
Just to be clear on why / what’s going on:
I stopped updating the dashboard in August because I started getting busy with some other projects, and my manager & I decided to deprioritize this. (There are some manual steps needed to keep the data live.)
I’ve now seen several people refer to that dashboard as a reference for how CEA is doing in ways I think are pretty misleading.
We (CEA) still intend to come back and fix this, and this is a good nudge to prioritize it.
Thanks!
Oh, huh, that seems very sad. Why would you do that? Please leave up the data that we have. I think it’s generally bad form to break links that people relied on. The data was accurate as far as I can tell until August 2024, and you linked to it yourself a bunch over the years, don’t just break all of those links.
I am pretty up-to-date with other EA metrics and I don’t really see how this would be misleading. You had a disclaimer at the top that I think gave all the relevant context. Let people make their own inferences, or add more context, but please don’t just take things down.
Unfortunately, archive.org doesn’t seem to have worked for that URL, so we can’t even rely on that to show the relevant data trends.
Edit: I’ll be honest, after thinking about it for longer, the only reason I can think of why you would take down the data is because it makes CEA and EA look less on an upwards trajectory. But this seems so crazy. How can I trust data coming out of CEA if you have a policy of retracting data that doesn’t align with the story you want to tell about CEA and EA? The whole point of sharing raw data is to allow other people to come to their own conclusions. This really seems like such a dumb move from a trust perspective.
I also believe that the data making EA and CEA look bad is the causal reason why it was taken down. However, I want to add some slight nuance.
I want to contrast a model whereby Angelina Li did this while explicitly trying to stop CEA from looking bad, versus a model whereby she senses that something bad might be happening, she might be held responsible (e.g. within her organization / community), and is executing a move that she’s learned is ‘responsible’ from the culture around her.
I think many people have learned to believe the reasoning step “If people believe bad things about my team I think are mistaken with the information I’ve given them, then I am responsible for not misinforming people, so I should take the information away, because it is irresponsible to cause people to have false beliefs”. I think many well-intentioned people will say something like this, and that this is probably because of two reasons (borrowing from The Gervais Principle):
This is a useful argument for powerful sociopaths to use when they are trying to suppress negative information about themselves.
The clueless people below them in the hierarchy need to rationalize why they are following the orders of the sociopaths to prevent people from accessing information. The idea that they are ‘acting responsibly’ is much more palatable than the idea that they are trying to control people, so they willingly spread it and act in accordance with it.
A broader model I have is that there are many such inference-steps floating around the culture that well-intentioned people can accept as received wisdom, and they got there because sociopaths needed a cover for their bad behavior and the clueless people wanted reasons to feel good about their behavior; and that each of these adversarially optimized inference-steps need to be fought and destroyed.
I agree, and I am a bit disturbed that it needs to be said.
At normal, non-EA organizations (and not only particularly villainous ones, either!) it is understood that you need to avoid sharing any information that reflects poorly on the organization, unless it’s required by law or contract or something. The purpose of public-facing communications is to burnish the org’s reputation. This is so obvious that they do not actually spell it out to employees.
Of COURSE any organization that has recently taken down unflattering information is doing it to maintain its reputation.
I’m sorry, but this is how “our people” get taken for a ride. Be more cynical, including about people you like.
(Comment not specific to the particulars of this issue but noted as a general policy:) I think that as a general rule, if you are hypothesizing reasons for why somebody might say a thing, you should always also include the hypothesis that “people say a thing because they actually believe in it”. This is especially so if you are hypothesizing bad reasons for why people might say it.
It’s very annoying when someone hypothesizes various psychological reasons for your behavior and beliefs but never even considers as a possibility the idea that maybe you have good reasons to believe it. Compare e.g. “rationalists seem to believe that superintelligence is imminent; I think this is probably because that lets them avoid taking responsibility for their current problems if AI will make those irrelevant anyway, or possibly because they come from religious backgrounds and can’t get over their subconscious longing for a god-like figure”.
I feel more responsibility to be the person holding/tracking the earnest hypothesis in a 1-1 context, or if I am the only one speaking; in larger group contexts I tend to mostly ask “Is there a hypothesis here that isn’t or likely won’t be tracked unless I speak up” and then I mostly focus on adding hypotheses to track (or adding evidence that nobody else is adding).
(Did Ben indicate he didn’t consider it? My guess is he considered it, but thinks it’s not that likely and doesn’t have amazingly interesting things to say on it.
I think having a norm of explicitly saying “I considered whether you were telling the truth but I don’t believe it” seems like an OK norm, but not obviously a great one. In this case Ben also responded to a comment of mine which already said this, so I really don’t see a reason for repeating it.)
(I read “this is probably because of two reasons” as implying that the list of reasons is considered to be exhaustive, such that any reasons besides those two have negligible probability.)
I gave my strongest hypothesis for why it looks to me that so many people believe it’s responsible to take down information that makes your org look bad. I don’t think alternative stories have negligible probability, nor does what I wrote imply that, though it is logically consistent with that.
There are many widespread anti-informative behaviors that people engage in for poor reasons: saying that their spouse is the best spouse in the world, telling customers that their business is the best business in the industry, or saying exclusively glowing things about people in reference letters. These are best explained by the incentives on the person to present themselves in the best light. At the same time, it is respectful to a person, while in dialogue with them, to keep track of the version of them who is trying their best to have true beliefs and honestly inform others around them, in order to help them become that person (and notice the delta between their current behavior and what they hopefully aspire to).
Seeing orgs in the self-identified-EA space take down information that makes them look bad is (to me) not that dissimilar to the other things I listed.
I think it’s good to discuss norms about how appropriate it is to bring up cynical hypotheses about someone during a discussion in which they’re present. In this case I think raising this hypothesis was worthwhile for the discussion, and I didn’t cut off any way for the person in question to continue to show themselves to be broadly acting in good faith, so I think it went fine. Li replied to Habryka, and left a thoughtful pair of comments retracting and apologizing, which reflected well on her in my eyes.
Okay! Good clarification.
To clarify, my comment wasn’t specific to the case where the person is present. There are obvious reasons why the consideration should get extra weight when the person is present, but there’s also a reason to give it extra weight if none of the people discussed are present—namely, that they won’t be around to correct any incorrect claims.
Agree.
(As I mentioned in the original comment, the point I made was not specific to the details of this case, but noted as a general policy. But yes, in this specific case it went fine.)
Quick thoughts on this:
“The data was accurate as far as I can tell until August 2024”
I’ve heard a few reports over the last few weeks that made me unsure whether the pre-Aug data was actually correct. I haven’t had time to dig into this.
In one case (the EA.org data), we have a known problem with the historical data that I haven’t had time to fix, which probably means the reported downward trend in views is misleading. Again, I haven’t had time to scope the magnitude of this, etc.
I’m going to check internally to see if we can just get this back up in a week or two. (It was already high on our stack, so this just nudges up the timeline a bit.) I will update this thread once I have a plan to share.
I’m probably going to drop responding to “was this a bad call” and prioritize “just get the dashboard back up soon”.
More thoughts here, but TL;DR: I’ve decided to revert the dashboard to its original state & have republished the stale data. (Just flagging for readers who wanted to dig into the metrics.)
Hey! I just saw your edited text and wanted to jot down a response:
I’m sorry this feels bad to you. I care about being truth-seeking, and I care about the empirical question of “what’s happening with EA growth?”. Part of my motivation in getting this dashboard published in the first place was to contribute to the epistemic commons on this question.
I also disagree that CEA retracts data that doesn’t align with “the right story on growth”. E.g. here’s a post I wrote in mid-2023 whose bottom-line conclusion was that growth in meta EA projects was down in 2023 vs. 2022. It also published data on several cases where CEA programs grew more slowly in 2023 or shrank. TBH I also think of this as CEA contributing to the epistemic commons here — it took us a long time to coordinate and then get permission from people to publish this. And I’m glad we did it!
On the specific call here, I’m not really sure what else to tell you re: my motivations other than what I’ve already said. I’m going to commit to not responding further to protect my attention, but I thought I’d respond at least once :)
I would currently be quite surprised if you had taken the same action if I were instead making an inference that reflected positively on CEA or EA. I might of course be wrong, but you did do it right after I wrote something critical of EA and CEA, and did not do it the many other times the dashboard was linked over the past year. Sadly, your institution has a long history of being pretty shady with data and public comms in this way, so my priors are not very positively inclined.
I continue to think that it would make sense to at least leave up the data that CEA did feel comfortable linking over the last 1.5 years. By my norms, invalidating links like this, especially when the underlying page happens to be unscrapeable by the Internet Archive, is really very bad form.
I did really appreciate your mid-2023 post!