The standard argument against open sourcing intermediate AI developments to the public is that it reduces the amount of lead time available to solve the inevitable safety problems that are going to crop up the first time a research lab finds its way to AGI. If, on the other hand, you have an architecture that's a precursor to AGI, and you open source it, then you must develop a working implementation and fix all of the safety problems before FAIR can even get started, which is probably impossible.
I would love to be the person who is on the side of “freedom” and “transparency”, but the standard argument seems pretty straightforwardly correct, at least to me.
I personally prefer taking a gamble on freedom instead of the certainty of a totalitarian regime.