If his beliefs are what I would have expected them to be (e.g. something like "agrees with the basic arguments laid out in Superintelligence, and was motivated to follow his current career trajectory by those arguments"), then this answer is at best misleading, and a misrepresentation of his actual models.
Seeing this particular example, I’m on the fence about whether to call it a “lie”. He was asked about the state of the world, not about his personal estimates, and he answered in a way that was more about the state of knowable public knowledge rather than his personal estimate. But I agree that seems pretty hair-splitting.
As it is, I notice that I’m confused.
Why wouldn’t he say something to the effect of the following?
I don't know; this kind of forecasting is very difficult, and timelines forecasting especially so. I can't speak with confidence one way or the other. However, my best guess from following the literature on this topic for many years is that the catastrophic concerns are credible. I don't know how probable it is, but it does not seem to me that AI leading to human extinction is merely an outlandish sci-fi scenario, and it is not out of the question that it will happen in the next 10 years.
That doesn’t just seem more transparent, and more cooperative with the questioner, it also seems...like an obvious strategic move?
Does he not, in fact, buy the basic arguments in Superintelligence? Is there some etiquette by which he feels that he shouldn't say that?
What’s missing from my understanding here?