People disagree (individual people over time as OpenAI’s policies have changed, and different people within the EA-sphere) over whether OpenAI is net positive or harmful. So if you’re confused about “isn’t this… just bad?”, know that you’re not alone in that reaction.
Arguments that OpenAI (and DeepMind) are pursuing reasonable strategies include something like:
Most AI researchers are excited about AI research and are going to keep doing it somewhere. If OpenAI or DeepMind switched to a “just focus on safety” plan, many of their employees would leave and go somewhere else where they could work on the things that excite them. Keeping top researchers concentrated in places with at least a plausible goal of “build safe things” makes it easier to coordinate on safety than if they were scattered across different orgs with zero safety focus.
Whether this is net good or bad depends on one’s models of the interiors of these organizations, of how humans work, and of how coordination works.