If I think that Sam is adopting an implausible and suspiciously rosy picture, then I should say so, right? And if Sam hasn’t made arguments that address the worries, then it’s at least among the top hypotheses that he’s just not taking them seriously, right? My original comment said that (on the basis of the essay, and the lack of linked arguments). It sounds like you took that to mean that anyone who doesn’t think fast, surprising takeoff is likely must not understand the arguments. That’s not what I said.
I’m confused here: while I definitely agree that AGI companies have terrible incentives for safety, I don’t see how this undermines DragonGod’s key point, exactly.
A better example of the problem with incentives is the incentive to downplay alignment difficulties.
What do you think DragonGod’s key point is? They haven’t argued against fast takeoff here. (Which is fine.) They seem to have misunderstood me as saying that no one who understands fast takeoff arguments would disagree that fast takeoff is likely, and then they’ve been defending their right to know about fast takeoff arguments and disagree that it’s likely.
I think a key point of DragonGod here is that the majority of the effort should go to scenarios that are likely to happen. While fast takeoff deserves some effort, at this point it’s a mistake to expect Sam Altman to condition heavily on fast takeoff, and not conditioning on it doesn’t make him irrational or ruled by incentives.
It does if he hasn’t engaged with the arguments.