He who gets to choose which thing is baseline and which thing gets the burden of proof is the sovereign.
(That said, I agree that the burden of proof is on people claiming that AGI is a thing, that it is probably happening soon, and that it'll probably be an existential catastrophe. But I think the burden of proof is much lighter than the weight of arguments and evidence that has accumulated so far to meet it.)
I’d be interested to hear your take on this article.
OK, fair. Well, as I always say these days, quite a lot of my views flow naturally from my AGI timelines. It’s reasonable to be skeptical that AGI is coming in about 4 years, but once you buy that premise, basically everything else I believe becomes pretty plausible. In particular, if you think AGI is coming in 2027, it probably seems pretty plausible that humanity will be unprepared, and more likely than not that things will go very badly. Would you agree?
I’m happy to define it more specifically—e.g. if you have time, check out What 2026 Looks Like and then imagine that in 2027 the chatbots finally become superhuman in all relevant intellectual domains (including agency / goal-directedness / coherence), whereas before they had been superhuman in some but subhuman in others. That’s the sort of scenario I think is likely. It’s a further question whether or not the AGIs would be aligned, to be fair. But much has been written on that topic as well.