It sounds to me like you are favoring the “everything’s going to be all right” conclusion quite heavily. You act like everything is going to be all right by default, and your arguments for why things will be all right aren’t very sophisticated.

> Then, if you identify that as being a problem, you redesign your test harness.

And we will certainly identify it as being a problem because humans know everything and they never make mistakes.

> I’m doubting whether the situation with no historical precedent will ever come to pass.

I see, similar to how housing prices will never drop? Have you read up on black swans? We are venturing into uncharted territory here. Historical precedents provide very weak information.

> I see, similar to how housing prices will never drop?

No.

> Have you read up on black swans?

Yes.

I don’t think it is likely that the world will end in accidental apocalypse in the next century. Few do—AFAICS—and the main proponents of the idea are usually selling something.
What level on the disagreement hierarchy would you rate this comment of yours?
http://www.paulgraham.com/disagree.html
It looks like mostly DH3 to me, with a splash of DH1 in implying that anyone who suggests that our future isn’t guaranteed to be bright must be selling something.
There’s a bit of DH4 in implying that this is an uncommon position, which implies very weakly that it’s incorrect. I don’t think this is a very uncommon position though:
http://www.ted.com/talks/lang/en/martin_rees_asks_is_this_our_final_century.html
http://www.ted.com/talks/stephen_petranek_counts_down_to_armageddon.html
http://www.ted.com/talks/jared_diamond_on_why_societies_collapse.html
http://www.wired.com/wired/archive/8.04/joy.html
And Stephen Hawking on AI:
http://www.zdnet.com/news/stephen-hawking-humans-will-fall-behind-ai/116616
That’s a fair analysis of those two lines—though I didn’t say “anyone”.
For evidence for “uncommon”, I would cite the Global Catastrophic Risks Survey results. Presumably a survey of the ultra-paranoid. The figures they came up with were:
Risk of human extinction from molecular nanotech weapons: 5%.
Risk of human extinction from superintelligent AI: 5%.
Overall risk of extinction prior to 2100: 19%.
Interesting data, thanks.
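For a rough sense of scale, here is a minimal sketch converting that 19% cumulative figure into an implied constant annual hazard rate. The constant-rate assumption and the 2008 start year are illustrative assumptions, not anything the survey itself reports:

```python
# Back-of-the-envelope conversion of a cumulative risk estimate into an
# implied constant annual hazard rate. Illustrative assumptions only:
# the survey reports no per-year breakdown.

years = 2100 - 2008        # horizon measured from the survey's 2008 publication (assumed)
p_cumulative = 0.19        # overall risk of extinction prior to 2100

# Solve 1 - (1 - r) ** years = p_cumulative for the annual rate r.
annual_rate = 1 - (1 - p_cumulative) ** (1 / years)

print(f"Implied constant annual risk: {annual_rate:.4%}")  # about 0.23% per year
```

Read that way, even these respondents’ headline figure works out to well under one percent of risk per year.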