People: “Ah, yes. We should trust OpenAI with AGI.”

OpenAI: https://www.nytimes.com/2024/07/04/technology/openai-hack.html

“But the executives decided not to share the news publicly because no information about customers or partners had been stolen, the two people said. The executives did not consider the incident a threat to national security because they believed the hacker was a private individual with no known ties to a foreign government. The company did not inform the F.B.I. or anyone else in law enforcement.”
Non-paywall link: http://web.archive.org/web/20240709012837/https://www.nytimes.com/2024/07/04/technology/openai-hack.html
Good call. Thanks!
Tyler Cowen often has really good takes (even some good stuff against AI as an x-risk!), but this was not one of them: https://marginalrevolution.com/marginalrevolution/2024/10/a-funny-feature-of-the-ai-doomster-argument.html
Title: A funny feature of the AI doomster argument
If you ask them whether they are short the market, many will say there is no way to short the apocalypse. But of course you can benefit from pending signs of deterioration in advance. At the very least, you can short some markets, or go long volatility, and then send those profits to Somalia to mitigate suffering for a few years before the whole world ends.
Still, in a recent informal debate at the wonderful Roots of Progress conference in Berkeley, many of the doomsters insisted to me that “the end” will come as a complete surprise, given the (supposed) deceptive abilities of AGI.
But note what they are saying. If markets will not fall at least partially in advance, they are saying the passage of time, and the events along the way, will not persuade anyone. They are saying that further contemplation of their arguments will not persuade any marginal investors, whether directly or indirectly. They are predicting that their own ideas will not spread any further.
I take those as signs of a pretty weak argument. “It will never get more persuasive than it is right now!” “There’s only so much evidence for my argument, and never any more!” Of course, by now most intelligent North Americans with an interest in these issues have heard these arguments and they are most decidedly not persuaded.
There is also a funny epistemic angle here. If the next say twenty years of evidence and argumentation are not going to persuade anyone else at the margin, why should you be holding this view right now? What is it that you know, that is so resistant to spread and persuasion over the course of the next twenty years?
I would say that to ask such questions is to answer them.
Yes, that was a pretty terrible take. Markets quite clearly do not price externalities well, and never have. So long as any given investor judges their specific investment unlikely to tip the balance into doom, they capture the upside of major AI-driven economic growth while facing essentially the same downside risk as if they hadn't invested at all. Arguments like “short some markets, or go long volatility, and then send those profits to Somalia to mitigate suffering for a few years before the whole world ends” are not even a serious attempt to describe the investment decisions that actually move real markets.
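To make that incentive asymmetry concrete, here is a minimal toy calculation. It is not from the original comment, and every number in it is an assumption chosen purely for illustration: the point is only that an investor who takes doom seriously still has little reason to abstain, because their own capital barely moves the probability of doom while the upside accrues to them directly.

```python
# Toy sketch of the externality argument above. All numbers are hypothetical.

P_DOOM = 0.10        # assumed baseline probability that AI development ends badly
DELTA_P = 1e-9       # assumed increase in that probability from one investor's capital
UPSIDE = 5.0         # assumed payoff multiple on the investment if AI goes well

def expected_payoff(invest: bool) -> float:
    """Expected payoff per dollar, from a single investor's point of view.

    If doom occurs, the dollar is worth nothing either way, so the only
    question is whether to capture the upside in the non-doom branch.
    """
    p = P_DOOM + (DELTA_P if invest else 0.0)
    payoff_if_fine = UPSIDE if invest else 1.0
    return (1 - p) * payoff_if_fine  # the doom branch contributes 0 in both cases

print(expected_payoff(invest=True))   # ~4.5
print(expected_payoff(invest=False))  # ~0.9
```

Under these assumed numbers, investing dominates abstaining for each individual investor, which is exactly why aggregate market prices say little about whether the externality is real.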
This is a really good debate on AI doom. I thought the optimistic side offered a model that I (and maybe others) should spend more time thinking about (mostly the mechanistic-explanation vs. trend-extrapolation and induction vs. empiricism framings), even though I think I disagreed with a lot of it at the object level:
https://marginalrevolution.com/marginalrevolution/2024/11/austrian-economics-and-ai-scaling.html
A good short post by Tyler Cowen on anti-AI Doomerism.
I recommend taking a minute to steelman the position before you decide to upvote or downvote this. Even if you disagree with it at the object level, there is still value in knowing the models where you may be most mistaken.