So, um, you think that the arms race is likely to be between DeepMind and OpenAI?
And not between a highly secret organization funded by the US government and another similar organization funded by the Chinese government?
One thing to watch for would be top-level AI talent getting snapped up by governments rather than companies interested in making better spam detectors/photo-sharing apps.
What makes you think the government can’t pay for secret work to be done at Google by Google researchers, or isn’t already doing so (and likewise the Chinese government with Baidu)? That would be easier and cheaper than hiring them all away and forcing them to work for lower pay at some secret lab in the middle of nowhere.
The point is that eliminating OpenAI (or merging them with DeepMind) will not lessen the arms-race-to-Skynet issue.
It might! The fewer people who are plausibly competing in the arms race, the better the chance of negotiating a settlement or simply maintaining a peaceful standoff out of caution. If OpenAI enables more entities to have a solid chance of creating a fooming AI in secret, that’s a much more urgent development than if China and the US are the only real threats to each other, and both know it.
Shall we revisit the difference between what’s possible and what’s likely?
For one, a lot of Baidu’s AI work happens in their Silicon Valley lab, which would certainly not be the case if it were a secret government project. But your general point stands.
That’s only the work you know about, though! Who’s to say they aren’t also involved in some sort of secret government project?