I would reemphasize that “does OpenAI increase risks” is a counterfactual question. That means we need to be clearer about what we are asking: what do we predict the counterfactuals to be, and what strategy options exist going forward? This is a major set of questions, and a single “increases or decreases risk” metric isn’t enough to capture much of interest.
For a taste of what we’d want to consider, what about the following:
Are we asking OpenAI to pick a different, “safer” strategy?
Perhaps they should focus more on hiring people to work on safety and strategy, and hire fewer capabilities researchers. That brings us to the Dr. Wily/Dr. Light question: perhaps Dr. Capabilities B. Wily shouldn’t be hired, and Dr. Safety R. Light should be, instead. That means Wily does capabilities research elsewhere, perhaps with more resources, and Light does safety research at OpenAI. But the counterfactual is that Light would do (perhaps slightly less well funded) safety research anyway, and Wily would work on (approximately as useful) capabilities research at OpenAI, advantaging OpenAI in any future capabilities races.
Are we asking OpenAI to be larger, and offering to find them funding if needed?
Perhaps they should hire both, along with all of Dr. Light’s and Dr. Wily’s research teams. Fast growth will dilute OpenAI’s culture, but give them an additional marginal advantage over other groups. Perhaps bringing them in would help OpenAI in race dynamics, but make it more likely that they’d engage in such races.
How much funding would this need? Perhaps none—they have cash, they just need to do this. Or perhaps tons, and we need them to be profitable, and focus on that strategy, with all of the implications of that. Or perhaps a moderate amount, and we just need OpenPhil to give them another billion dollars, and then we need to ask about the counterfactual impact of that money.
Or should OpenAI focus on redirecting their capabilities staff to work on safety, accepting a harder time hiring the best people who want to work on capabilities? Or should OpenAI be smaller and more focused, and reserve cash?
These are all important questions, but they need much more time than I (or, I suspect, most of the readers here) have available, and they are probably already being discussed more usefully by both OpenAI and their advisors.
Also, apparently Megaman is less popular than I thought, so I added links to the names.
Oh. Right. I should have gotten the reference, but wasn’t thinking about it.
Fwiw I recently listened to the excellent song ‘The Good Doctor’ which has me quite delighted to get random megaman references.
Just so you know, I got the reference. ;)