I agree with Eliezer that this would most likely be “suicide”. Open-sourcing code would mean that bad actors gain access to powerful AI systems immediately upon development (or sooner, if they decide to front-run). At least with the current system, corporations are able to test models before release, determine their capabilities and prepare society for what’s coming. It also preserves the option of not releasing at all if we decide the risks are too great.
I agree with Vladimir’s point that whilst you say everyone supporting the moratorium has “unaccountably lost their ability to do elementary game theory”, you don’t seem to have applied this lens yourself. I’d suggest asking yourself why you weren’t able to see this. In my experience, this often happens when I hold a strong pre-existing belief, which makes it hard to see any potential flaws in that perspective unless I really make myself look.
“Moratorium won’t work. Monopoly won’t either. Freedom and transparency might.”—the word “might” is doing a lot of work here. You’ve vaguely gestured in a particular direction, but not really filled in the details. I think if you attempted to do that, you’d find it hard to fill in the concrete details in a way that actually works.
Lastly, this misses what I see as the crux of the issue, which is the offense-defense balance. I think advanced AI systems will heavily favour the attacker, given that you only need, for example, one security flaw to completely compromise your opponent’s system. If this is the case, then everyone being at roughly the same capability level won’t really help.
Of course the word “might” is doing a lot of work here! Because there is no guaranteed happy solution, the best we can do is steer away from futures we absolutely know we do not want to be in, like a grinding totalitarianism rationalized by “We’re saving you from the looming threat of killer AIs!”
“At least with the current system, corporations are able to test models before release”. The history of proprietary software does not inspire any confidence that this will be done adequately, or even at all; in a fight between time-to-market and software quality, getting there firstest almost always wins. It’s not reasonable to expect this to change simply because some people have strong opinions about AI risk.
OpenAI seems to have held off on the deployment of GPT-4 for a number of months. They also brought on ARC Evals and a number of outside experts to help evaluate the risks of releasing the model.