(Ideally we’d be clearer about what timelines we mean here; I’ll assume it’s TAI timelines for now.)
Conditional on 10-year timelines, maybe I’m at 20%? This is also higher than my unconditional probability of x-risk.
I’m not sure which part of my claim you’re surprised by? Given what you asked me, maybe you think I believe that 10-year timelines are safer than >10-year timelines? I definitely don’t believe that.
My understanding was that this post was suggesting that timelines are longer than 10 years, e.g. from sentences like this:
I’m not claiming that these Tool AI’s won’t eventually be dangerous, but I can’t see this path leading to high existential risk in the next decade or so.
And that’s the part I agree with (including their stated views about what will happen in the next 10 years).