Of course the word “might” is doing a lot of work here! Because there is no guaranteed happy solution, the best we can do is steer away from futures we absolutely know we do not want to be in, like a grinding totalitarianism rationalized by “We’re saving you from the looming threat of killer AIs!”
“At least with the current system, corporations are able to test models before release.” The history of proprietary software does not inspire any confidence at all that this will be done adequately, or even at all; in a fight between time-to-market and software quality, getting there first almost always wins. It’s not reasonable to expect this to change simply because some people have strong opinions about AI risk.
OpenAI seems to have held off on the deployment of GPT-4 for a number of months. They also brought on ARC Evals and a number of outside experts to help evaluate the risks of releasing the model.