To give credit where it’s due, I’m impressed that someone could ask whether EA and Rationality have been net negative by our own values. While I suspect an honest investigation would conclude they weren’t, as Scott Garrabrant said, Yes requires the possibility of No, and there’s an outside chance such an investigation would return that EA/Rationality is net negative.
Also, I definitely agree that we should probably talk more about things outside the Overton Window.
Re Sam Altman’s tweet, I actually think it’s reasonably neutral from my vantage point, perhaps because I’m far more optimistic about AI risk and AI Alignment than most of LW.