I think open source AI development is bad for humanity, and I think one of the good things about the OpenAI team is that they seem to have realized this (though perhaps for the wrong reasons).
I am curious about the counterfactual where the original team had realized being open was a mistake from the beginning (let’s call that hypothetical project WindfallAI, or whatever, after their charter clause). Would Elon not have funded it? Would some founders (or early employees) have decided not to join?
It seems fairly clear that Elon would not have funded it. The idea might not even have existed; the effort might still be unified within DeepMind, with the rest of the field still in denial about imminent generality, for lack of language demonstrations.
If you’re asking about the deeper causality behind why Elon has always thought open source would be the safest route, I don’t know if there is any; it might simply have been an idiosyncratic opinion.