What level on the disagreement hierarchy would you rate this comment of yours?
http://www.paulgraham.com/disagree.html
It looks like mostly DH3 to me, with a splash of DH1 in implying that anyone who suggests that our future isn’t guaranteed to be bright must be selling something.
There’s a bit of DH4 in implying that this is an uncommon position, which implies very weakly that it’s incorrect. I don’t think this is a very uncommon position though:
http://www.ted.com/talks/lang/en/martin_rees_asks_is_this_our_final_century.html
http://www.ted.com/talks/stephen_petranek_counts_down_to_armageddon.html
http://www.ted.com/talks/jared_diamond_on_why_societies_collapse.html
http://www.wired.com/wired/archive/8.04/joy.html
And Stephen Hawking on AI:
http://www.zdnet.com/news/stephen-hawking-humans-will-fall-behind-ai/116616
That’s a fair analysis of those two lines, though I didn’t say “anyone”.
As evidence for “uncommon”, I would cite the Global Catastrophic Risks Survey results, presumably a survey of the ultra-paranoid. The figures they came up with were:
Risk of human extinction from molecular nanotech weapons: 5%.
Risk of human extinction from superintelligent AI: 5%.
Overall risk of human extinction prior to 2100: 19%.
Interesting data, thanks.