This seems like an accurate and honest representation of Altman’s views as of 2015, particularly since it was before he had a personal financial or reputational stake in the development of AGI.
Since the regulation he called for has not come about, I’d think he’s following a different strategy now: probably to develop it first so someone else doesn’t. I actually take his statements on the risk seriously. I think he probably believes, as stated, that alignment isn’t solved.
I think OpenAI has now hit a level of financial success at which the profit motive is probably weaker than the reputational motive. I think Altman probably thinks more about becoming a hero in the public eye and avoiding being a villain than he does about becoming the head of the largest company in history. But both will be a factor. In any case, I think his concerns about alignment are sincere. That’s not enough of a guarantee that he’ll get it right as we close in on X-risk AI (XRAI seems a more precise term than AGI at this point), but it is something.
He said that they weren’t training a GPT-5 and that they would rather focus on adapting smaller AIs to society (I suppose they might still be doing AGI research regardless, just not training new models).
I thought that it might have been to slow down the race.
Very interesting, thanks for sharing.