Thanks for expressing this perspective.
I note that Musk was the first one to start a competitor, which seems to me to have been very costly.
I think that founding OpenAI could have been right if the non-profit structure was likely to work out. I don’t know whether that made sense at the time. Altman has since survived being fired by the board, removed parts of the board, and is rumored to be converting the organization to a for-profit, which is strong evidence that the non-profit was not able to withstand the pressures that were coming. But even without Altman, I suspect it would still have taken billions of dollars of funding, partnerships like the one with Microsoft, and other for-profit pressures to become the sort of player it is today. So I don’t know that Musk’s plan was viable at all.
Note that all of this happened before the scaling hypothesis was really formulated, much less made obvious.
We now know, with the benefit of hindsight, that developing AI and its precursors is extremely compute-intensive, which means capital-intensive. There was some reason to guess this might be true at the time, but it wasn’t a foregone conclusion; it was still an open question whether the key to AGI would be mostly some technical innovation that hadn’t been developed yet.
Hm, but I note that others at the time felt it was clear that this would exacerbate the competition (1, 2).