I don’t predict that a superintelligent singleton (having fused with the other AIs) would need to design a bioweapon or otherwise explicitly kill everyone. I expect it would simply transition to using more efficient tools than humans, and transfer the existing humans into hyperdomestication programs.
+1, this is clearly a lot more likely than the alignment process missing humans entirely IMO
It’s not really a well-defined thing, which is why the standard on this site is to taboo those words and just explain what your lines of evidence are, or the motivation for any special priors if you have them.
So, your claim is that interest rates would be very high if AGI were imminent, and since they’re not, AGI isn’t imminent. The last time someone said this, the people arguing in the comment section would have made a lot of money if they had simply bet on interest rates changing! Ditto for buying up AI-related stocks, or call options on those stocks.
I think you’re just overestimating the market’s ability to generalize to out-of-distribution events. Prices are set by a market’s participants, and the institutions with the ability to move prices are mostly not thinking about AGI timelines at present. Bridgewater would be doing basically the same things whether AGI were arriving in five or ten or twenty years, so their inaction doesn’t provide much evidence. These forecasts also bake in a lot of assumptions about the value of money (or of titles to partial ownership of companies controlled by Sam Altmans) in a post-AGI scenario. Those are pretty well-disputed premises, to say the least, which makes interpreting current market prices hard.
The issue is that ML research itself is composed of many tasks that take humans less than a month to execute. For example, on this model, sometime before “idea generation” you’re going to have a model that can do most high-context software engineering tasks. The research department at any of the big AI labs would be able to do more with such a model. So while current AI isn’t accelerating machine learning research very much, as it gets better the trend line from the METR paper is going to curl upward.
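To make “curl upward” concrete, here’s a toy sketch of what I mean (all the numbers and the feedback term are made up for illustration, not taken from the METR paper): compare a task-horizon trend with a fixed doubling time against one where effective research speed grows with the current horizon.

```python
# Toy illustration (assumed parameters, not from the METR paper): task horizon under a
# fixed doubling time vs. one where AI assistance speeds up the research producing it.
h_plain = h_acc = 1.0   # starting task horizon, in hours (assumed)
doubling_time = 7.0     # months per doubling without AI assistance (assumed)

for month in range(48):
    h_plain *= 2 ** (1 / doubling_time)
    # Assume effective research speed grows with current capability;
    # the functional form below is purely illustrative.
    speedup = 1 + 0.1 * h_acc ** 0.5
    h_acc *= 2 ** (speedup / doubling_time)

print(f"After 48 months: fixed trend ~{h_plain:.0f}h horizon, "
      f"feedback trend ~{h_acc:.0f}h horizon")
```

The exact numbers don’t matter; the point is just that any positive feedback from AI-assisted research bends an otherwise straight exponential trend upward on a log plot.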
You could say that the “inventing important new ideas” part is going to be such a heavy bottleneck that this speedup won’t amount to much. But I think that’s mostly wrong: if you asked ML researchers at OpenAI, they would tell you that a drop-in remote worker that could “only” be directed to do tasks that otherwise take 12 hours would still speed up their work by a lot.
It’s actually not circular at all. “Current AI research” has taken us, in about five years, from machines that couldn’t talk to machines that can talk, write computer programs, give advice, and so on. That’s the empirical evidence that you can make research progress doing “random” stuff. In the absence of further evidence, people are just expecting the thing that has happened over the last five years to continue. You can reject that claim, but at this point I think the burden of proof is on the people who do.