That all makes sense. It does feel like this is worth a larger conversation now that people are thinking about it, and I don’t think you guys are the only ones.
I’m reminded of this Sam Altman tweet: https://mobile.twitter.com/sama/status/1621621724507938816
To give credit where it’s due, I’m impressed that someone could ask the question of whether EA and Rationality were net negative by our values. While I suspect an honest investigation would conclude they weren’t, as Scott Garrabrant said, Yes requires the possibility of No, and there’s an outside chance such an investigation would return that EA/Rationality is net negative.
Also, I definitely agree that we probably should talk more about things that are outside the Overton Window.
Re Sam Altman’s tweet, I actually think it’s reasonably neutral from my vantage point, maybe because I’m way more optimistic on AI risk and AI Alignment than most of LW.