Thinking about this post for a bit shifted my view of Elon Musk. He gets flak for calling for an AI pause and then going and starting an AGI lab, and I now think that’s unfair.
I think his overall strategic takes are harmful, but I do credit him with being basically the only would-be AGI-builder who seems to me to be engaged in a reformative hypocrisy strategy. For one thing, it sounds like he went out of his way to try to get AI regulated (talking to Congress, talking to the governors), and supported SB-1047.
I think it’s actually not that unreasonable to shout “Yo! This is dangerous! This should be regulated, and controlled democratically!”, see that that’s not happening, and then go and try to do it in a way that you think is better.
That seems like possibly an example of “follower-conditional leadership”: taking real action to shift to the better equilibrium, failing, and then falling back to the dominant strategy given the inadequate equilibrium that exists.
Obviously he has different beliefs than I do, and than my culture does, about what is required for a good outcome. I think he’s still causing vast harms, but I think he doesn’t deserve the eye-roll for founding another AGI lab after calling for everyone to stop.
Thanks for expressing this perspective.
I note Musk was the first one to start a competitor, which seems to me to be very costly.
I think that founding OpenAI could be right if the non-profit structure was likely to work out. I don’t know if that made sense at the time. Altman has overpowered getting fired by the board, removed parts of the board, and rumor has it he is moving OpenAI to a for-profit structure, which is strong evidence against the non-profit being able to withstand the pressures that were coming. But even without Altman, I suspect OpenAI would still have needed billions of dollars of funding and partnerships like the one with Microsoft, and would have faced the same for-profit pressures, to become the sort of player it is today. So I don’t know that Musk’s plan was viable at all.
Note that all of this happened before the scaling hypothesis was really formulated, much less made obvious.
We now know, with the benefit of hindsight, that developing AI and its precursors is extremely compute-intensive, which means capital-intensive. There was some reason to guess this might be true at the time, but it wasn’t a foregone conclusion; it was still an open question whether the key to AGI would be mostly some technical innovation that hadn’t been developed yet.
Hm, but I note others at the time felt it was clear that this would exacerbate the competition (1, 2).