This post has a lot of great points.

But one thing that mars it somewhat for me is the discussion of OpenAI.
To me, the criticism of OpenAI feels like it’s intended as a tribal signifier, like “hey look, I am of the tribe that is against OpenAI”.
Now maybe that’s unfair, maybe you had no intention of anything like that and my vibe detection is off; but if I get that impression, I think it’s reasonably likely that OpenAI decision-makers would get the same impression, and I presume that’s exactly what you don’t want, given the rest of the post.
And even leaving aside practical considerations, I don’t think OpenAI warrants being treated as the leading example of rationality failure.
First, I am not convinced that the alternative to OpenAI existing is the absence of a capabilities race. On the contrary, I think a capabilities race was inevitable, and the fact that the leading AI lab has as decent a plan as it does is potentially a major win for the rationality community.
Also, while OpenAI’s plans so far look inadequate, to me they look considerably more sane than MIRI’s proposal to attempt a pivotal act with non-human-values-aligned AI. There is also potential for OpenAI’s plans to improve as more knowledge about mitigating AI risk is gained, which is helped by their relatively serious attitude compared to that of, for example, Google after its recent reorganization, or Meta.
And while OpenAI is creating a race dynamic by getting ahead, IMO MIRI’s pivotal-act plan would create a far worse race dynamic if MIRI were showing signs of being able to pull it off anytime soon.
I know many others don’t disagree with the post on this, but I think there is enough of a case for OpenAI being less bad than the potential alternatives that treating it as an uncontroversially bad thing detracts from the post.
I single it out because Yudkowsky singled it out, and he seems to see it as a major negative consequence for the goals he was trying to achieve with the community.