I would caveat that a decent fraction of alignment researchers have pessimistic takes, though I agree this is not a consensus for the whole field. So there’s far from a consensus on optimistic takes either (which I don’t think you were claiming, but that is one way your message can be interpreted).
I was surprised by this claim. To be concrete, what’s your probability of x-risk conditional on 10-year timelines? Mine is something like 25%, I think, which is higher than my unconditional probability of x-risk.
(Ideally we’d be clearer about what timelines we mean here; I’ll assume it’s TAI timelines for now.)
Conditional on 10-year timelines, maybe I’m at 20%? This is also higher than my unconditional probability of x-risk.
I’m not sure which part of my claim you’re surprised by? Given what you asked me, maybe you’re assuming I think 10-year timelines are safer than >10-year timelines? I definitely don’t believe that.
My understanding was that this post was suggesting that timelines are longer than 10 years, e.g. from sentences like this:
I’m not claiming that these Tool AI’s won’t eventually be dangerous, but I can’t see this path leading to high existential risk in the next decade or so.
And that’s the part I agree with (including their stated views about what will happen in the next 10 years).
I basically agree with this vision and I also agree that the many recent pessimistic takes are not representative of the field as a whole.