I think it’s wise to assume Sam’s public projection of short timelines does not reflect private evidence or careful calibration. He’s a known deceiver, politically astute and eloquent, and it’s his job to be bullish: to keep the money and hype flowing and the talent incoming. One’s analysis of his words should begin with “what reaction is he trying to elicit from people like me, and how is he doing it?”
Agree, but I’m not sure what you are implying. Is it that Sam is not as concerned about risks because the expected capabilities are lower than he publicly lets on, that timelines are longer than indicated, and hence that we should be less concerned as well?
On the one hand, this is consistent with Sam’s family planning. On the other hand, other OpenAI employees who are less publicly involved, and who perhaps derive less marginal utility from hype messaging, tell consistent stories (e.g. roon, https://nitter.poast.org/McaleerStephen/status/1875380842157178994#m).
The implication is that you absolutely can’t take Altman at his bare word, especially for any statement that, if true, would result in OpenAI getting more resources. Thus you need to a) apply some interpretive filter to everything Altman says, and b) listen instead to other people who don’t have a public track record of manipulation like Altman’s.