The 10,000-year trend argument is a typical example of anthropic shadow. It's essentially worthless because no civilization will ever face the task of recognising an upcoming apocalypse with the help of a treasure trove of previous experiences of past apocalypses: you only get one apocalypse, or zero. (TBF this would be fixed if we had knowledge of other planets and their survival or death, but we don't. The silence from space either means we're alone in the universe, or there's a Great Filter at work, which is bad news. There are other arguments here about how AGIs might be loud, but then again Dark Forest and Grabby Aliens… it's hard to use evidence-from-lack-of-aliens either way, IMO.)
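To sketch the anthropic-shadow point in Bayesian terms (a toy observer-selection model of my own, not a full treatment, and assuming we could only be making this observation in worlds that survived):

$$P(\text{we observe a clean 10{,}000-year record} \mid p) = 1 \quad \text{for any per-century risk } p,$$

so the likelihood is flat in $p$ and Bayes gives no update: the clean historical record by itself tells us nothing about how risky the future is.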
I’m also unconvinced by the “argument by Sci-Fi”. The update on a prediction being true, given that it appeared in Sci-Fi, should depend on Sci-Fi’s track record at picking up ideas that turn out to be true, and that update could go either way. If, for example, Sci-Fi had a habit of picking up ideas that Look Cool, and Looking Cool were totally uncorrelated with Coming To Pass, then there should be no update at all.
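To make the no-update case concrete (a minimal Bayes sketch, with labels of my own choosing):

$$P(\text{comes to pass} \mid \text{in Sci-Fi}) = \frac{P(\text{in Sci-Fi} \mid \text{comes to pass})\,P(\text{comes to pass})}{P(\text{in Sci-Fi})}.$$

If Sci-Fi selects ideas only for Looking Cool, and Looking Cool is independent of Coming To Pass, then $P(\text{in Sci-Fi} \mid \text{comes to pass}) = P(\text{in Sci-Fi})$ and the posterior collapses back to the prior: zero update. If cool-sounding ideas were anti-correlated with coming true, appearing in Sci-Fi would even be weak evidence against.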
Realistically, Sci-Fi doesn’t just predict; it influences what people try to build, which makes evaluating its track record even harder. Some things, like jetpacks or flying cars, were simply so impractical that they resisted every attempt to build them. Others were practical and did get made (e.g. tablets). I’d argue that on this specific topic, Sci-Fi mostly took its cues from history, using AI as a stand-in for either the human oppressed (A.I., Automata, I, Robot) or human oppressors (Terminator, The Matrix), and often both: the oppressed rising up and turning oppressor themselves. This anthropomorphizes the AI too much, but it also captures some essential dynamics of power and conflict that are purely game-theoretic and that do apply to both human history and AGI.