Thanks for the reply!

The main reason I didn’t understand (despite some things being listed) is that I assumed none of that was happening at Lightcone (because I guessed you would, for example, filter out EAs with bad takes in favor of rationalists). The fact that some people in EA (a huge, broad community) are probably wrong about some things didn’t seem like an argument that the Lightcone Offices would be ineffective, since (AFAIK) you could filter people at your discretion.
More specifically, I had no idea that “a huge component of the Lightcone Offices was causing people to work at those organizations”. That’s a strikingly more debatable move, and I’m curious why it happened in the first place. In my field-building work in France, we talk about x-risk and alignment, and people don’t want to accelerate the labs, but they do want to slow things down or do alignment work. I feel a bit preachy here, but internally the obvious move just feels like “stop doing the probably bad thing”, though I do understand that if you got into this situation unexpectedly, you may have a better chance of burning this place down and creating a fresh one with better norms.
Overall I get a weird feeling of “the people doing bad stuff are being protected again; we should name more precisely who’s doing the bad stuff and why we think it’s bad” (because I feel targeted by vague descriptions like “field-building”, even though I certainly don’t feel like I contributed to any of the bad stuff being pointed at).
No, this does not characterize my opinion very well. I don’t think “worrying about downside risk” is a good pointer to what I think will help, and I wouldn’t characterize the problem as people having spent too little effort or too little time worrying about downside risk. I think people do care about downside risk; I just also think there are consistent and predictable biases that cause people to be unable to stop, or to be unable to properly notice, certain types of downside risk, though that statement feels kind of vacuous in my mind and doesn’t capture the vast majority of the interesting detail of my model.
So it’s not a problem of not caring, but of not succeeding at the task. I assume the kind of errors you’re pointing at are things that should happen less with more practiced rationalists? I guess we can then either filter for people who are already pretty good rationalists, or train them (I don’t know if there are good results on that front from CFAR).
The fact that some people in EA (a huge, broad community) are probably wrong about some things didn’t seem like an argument that the Lightcone Offices would be ineffective, since (AFAIK) you could filter people at your discretion.
I mean, no, we were specifically trying to support the EA community; we do not get to unilaterally decide who is part of the community. People I don’t personally have much respect for, but who are members of the EA community putting in the work to be considered members in good standing, definitely get to pass through. I’m not going so far as to say this was the only thing going on: I made choices about which parts of the movement seemed to be producing good work and acting ethically, and which parts seemed pretty horrendous and to be avoided. But I would (for instance) regularly make an attempt to welcome people from an area that seemed to have poor connections in the social graph (e.g. the first EA from country X, from org Y, from area-of-work Z, etc.), even if I wasn’t excited about that person or place or area, because it was part of the EA community and it seems very valuable for the community as a whole to have better interconnectedness between its disparate parts. Overall, I think the question I asked was closer to “what would a good custodian of the EA community want to use these resources for?” than to “what would Ben or Lightcone want to use these resources for?”
As to your confusion about the office, an analogy that might help here is to consider the marketing or recruitment arm of a large company, or perhaps a branch of the company that makes a different product from the rest — yes, our part of the organization functioned nicely, and I liked the choices we made, but if some other part of the company is screwing over its customers or staff, or the CEO is stealing money, or the company’s product seems unethical to me, it doesn’t matter whether I like my part of the company; I am contributing to the company’s life and output and should act accordingly. I did not work at FTX and I have not worked for OpenAI, but I am heavily supporting an ecosystem that supported those companies, and I anticipate that the resources I contribute will continue to get captured by these sorts of players via some circuitous route.