So, Meta disbanded its responsible AI team. I hope this story reminds everyone about the dangers of acting rashly.
Firing Sam Altman was really a one-time-use card.
Microsoft probably threatened to pull its investment and compute, which would have let Sam Altman's new competitor pull ahead regardless, since OpenAI would have been left eviscerated in both funding and human capital. That move makes sense if you're at the precipice of AGI, but not before.
Their Responsible AI team was already in pretty bad shape after the recent layoffs. I think Facebook just decided to cut costs.
It was weird anyway that they had LeCun in charge and something called a "Responsible AI team" at the same company. No matter what one thinks of Sam Altman now, the things he said about AI risks sounded a hundred times more reasonable than what LeCun says.
Meta’s actions seem unrelated?