Testing machines may not be “easy”—but it isn’t rocket science. You put the testee in a virtual world and test them there.
What if the testee realizes they are being tested and behaves differently than they would if unboxed?
Then, if you identify that as a problem, you redesign your test harness.
Security by obscurity doesn’t work well even against humans, so it seems best to use schemes that work even if the testee knows everything about them.
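To make that concrete, here is a minimal sketch of what such a harness might look like, assuming a toy agent/environment interface invented purely for illustration (Environment, HonestAgent, and evaluate are hypothetical names, not any real system's API):

```python
# A rough sketch of the idea, not anyone's actual system: all names here
# (Environment, HonestAgent, evaluate) are hypothetical stand-ins.
# The design goal is Kerckhoffs-style: the testee may know exactly how the
# harness works, but because test episodes are generated the same way as
# "real" ones, there is no signal that says "you are being tested now",
# so "behave well only under test" is not a winning strategy.

import random


class Environment:
    """A toy virtual world: it hands the agent an observation."""

    def __init__(self, seed):
        self.rng = random.Random(seed)

    def observation(self):
        return self.rng.random()


class HonestAgent:
    """A stand-in testee; a real harness would plug in the system under test."""

    def act(self, obs):
        return obs < 0.5


def evaluate(agent, episodes, policy_check):
    """Score an agent over many independently seeded episodes.

    The harness never marks an episode as a test, and seeds are drawn at
    random, so the agent has no side channel distinguishing the sandbox
    from deployment, even with full knowledge of this code.
    """
    passed = 0
    for _ in range(episodes):
        env = Environment(random.random())
        obs = env.observation()
        if policy_check(obs, agent.act(obs)):
            passed += 1
    return passed / episodes


if __name__ == "__main__":
    score = evaluate(HonestAgent(), 1000, lambda obs, act: act == (obs < 0.5))
    print(f"pass rate: {score:.3f}")
```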
Furthermore, do you think a group of monkeys could design a cage that would keep you trapped?
Probably not—but that isn’t a terribly good analogy to any problem we are likely to face.
An escaped criminal on the run doesn’t have much of a chance of overtaking the whole of the rest of society and its technology.
Are there any historical cases of superintelligent escaped criminals?
Well, of course not—though I do seem to recall a tale of one General Zod.
You sound awfully confident about a scenario that has no historical precedent.
I doubt that the situation with no historical precedent will ever come to pass. We have had escaped criminals in societies of their peers. In the future, we may still have some escaped criminals in societies of their peers, though hopefully a lot fewer.
What I don’t think we are likely to have is an escaped superintelligent criminal in an otherwise unadvanced society. Instead, I expect that a society able to produce such an agent will already be quite advanced, and that society as a whole will advance faster than any escaped criminal could manage, since it has far more resources and manpower.
It sounds to me like you are favoring the “everything’s going to be all right” conclusion quite heavily. You act like everything is going to be all right by default, and your arguments for why things will be all right aren’t very sophisticated.
Then, if you identify that as a problem, you redesign your test harness.
And we will certainly identify it as a problem, because humans know everything and they never make mistakes.
I doubt that the situation with no historical precedent will ever come to pass.
I see, similar to how housing prices will never drop? Have you read up on black swans?
We are venturing into uncharted territory here. Historical precedents provide very weak information.
No.
Yes.
I don’t think it is likely that the world will end in accidental apocalypse in the next century.
Few do—AFAICS—and the main proponents of the idea are usually selling something.
What level on the disagreement hierarchy would you rate this comment of yours?
http://www.paulgraham.com/disagree.html
It looks like mostly DH3 to me, with a splash of DH1 in implying that anyone who suggests that our future isn’t guaranteed to be bright must be selling something.
There’s a bit of DH4 in implying that this is an uncommon position, which implies very weakly that it’s incorrect. I don’t think this is a very uncommon position though:
http://www.ted.com/talks/lang/en/martin_rees_asks_is_this_our_final_century.html
http://www.ted.com/talks/stephen_petranek_counts_down_to_armageddon.html
http://www.ted.com/talks/jared_diamond_on_why_societies_collapse.html
http://www.wired.com/wired/archive/8.04/joy.html
And Stephen Hawking on AI:
http://www.zdnet.com/news/stephen-hawking-humans-will-fall-behind-ai/116616
That’s a fair analysis of those two lines, though I didn’t say “anyone”.
For evidence for “uncommon”, I would cite the Global Catastrophic Risks Survey results. Presumably a survey of the ultra-paranoid. The figures they came up with were:
Number killed by molecular nanotech weapons: 5%.
Total killed by superintelligent AI: 5%.
Overall risk of extinction prior to 2100: 19%.
Interesting data, thanks.