Agreed, but I'm not sure what you're implying. Is it that Sam is less concerned about risks because the expected capabilities are lower than he publicly lets on and the timelines longer than indicated, and hence that we should be less concerned as well?
On the one hand, this is consistent with Sam's family planning. On the other hand, other OpenAI employees who are less publicly visible, and who perhaps gain less marginal utility from hype messaging, tell consistent stories (e.g. roon, https://nitter.poast.org/McaleerStephen/status/1875380842157178994#m).
The implication is that you absolutely can't take Altman at his bare word, especially for any statement that, if true, would result in OpenAI getting more resources. Thus you need to a) apply some interpretive filter to everything Altman says, and b) instead listen to other people who don't have Altman's public track record of manipulation.