I analogized AI doom to saying-the-fire-will-kill-the-dog; you analogize AI doom to saying-supernatural-shit-will-happen. Which analogy is more apt? That’s the crux.
OK, let’s zoom in on the fire analogy from the meme. The point I was making is that the doomers have been making plenty of falsifiable predictions which have in fact been coming true—e.g. that the fire will grow and grow, that the room will get hotter and brighter, and eventually the dog will burn to death. Falsifiability is relative. The fire-doomers have been making plenty of falsifiable predictions relative to interlocutors who were saying that the fire would disappear or diminish, but not making any falsifiable predictions relative to interlocutors who were saying that the fire would burn brighter and hotter but then somehow fail to kill the dog for some reason or other.
Similarly, the AGI doomers have been making plenty of falsifiable predictions relative to most people, e.g. people who didn’t think AGI was possible in our lifetimes, or people who thought it would be tool-like rather than agent-like, or modular rather than unified. But their predictions have been unfalsifiable relative to certain non-doomers who also predicted unified agentic AGI that for one reason or another wouldn’t cause doom.
OK. Next point: Falsifiability is symmetric. If my predictions are unfalsifiable relative to yours, yours are also unfalsifiable relative to mine. Fundamentally what’s going on is simply that there are two groups of people whose expectations/predictions are the same (at least for now). This sort of thing happens all the time, and in fact is inescapable due to underdetermination of theory by data.
Next point: We have more resources to draw from besides falsifiability. We can evaluate arguments, we can build models and examine their assumptions, we can compare the simplicity and elegance of different models… All the usual tools of science for adjudicating disputes between different views that are both consistent with what we’ve seen so far.
Final point: AGI doomers have been unusually good about trying to stick their necks out and find ways that their views differ from the views of others. AGI doomers have been making models and arguments, and putting them in writing, and letting others attack premises and cast doubt on assumptions and poke holes in logic and cheer when a prediction turns out false… more so than AGI non-doomers. Imagine what it would look like if the shoe were on the other foot: there’d be an “AGI non-doom” camp making big multi-premise arguments about why AGI will probably go well by default, and making various predictions, and then a dispersed, disunified gaggle of doomer skeptics poking holes in the argument and saying “we don’t have enough evidence for such a crazy view” and “show me a falsifiable claim” and cheering when some prediction turns out false.
Putting it together… IMO you are using “y’all aren’t making falsifiable predictions” as a rhetorical cudgel with which to beat your opponents, when actually they would be equally justified in using it against you. It’s a symmetric weapon; it isn’t actually a component of good epistemology (at least not when used the way you are using it).