Fair enough, “trivial” overstates the case. I do think it is overwhelmingly likely.
That said, I’m not sure how much we actually disagree on this? I was mostly trying to highlight the gap between an AI having a capability and us having the control to use an AI to usefully benefit from that capability.
I personally agree that on the default trajectory it’s very likely that at the point where AIs are quite existentially dangerous (in the absence of serious countermeasures) they are also capable of being very useful (though misalignment might make them hard to use).
However, I think this is a key disagreement I have with more pessimistic people, who think that at the point where models become useful, they’re also qualitatively, wildly superhumanly dangerous. And this implies (assuming some rough notion of continuity) that there were earlier AIs which weren’t very useful but which were still dangerous in some ways.
Yeah, there are lots of ways to be useful, and not all of them require any superhuman capabilities. Open questions include how much usefulness comes from broadly effective intelligence vs. targeted capabilities development (seems like more the former lately), and how much comes from being cheap-but-good-enough compared to humans vs. better-than-human along some axis.