A contest like this could be a nice way to put the rhetorical onus on AI researchers to demonstrate that their approach to AI can be safe. Instead of the Singularity Institute having to prove that AGI can potentially be dangerous, AGI researchers should really have to prove the opposite.
It’s also pretty digestible from a publicity standpoint. You don’t have to know anything about the intelligence explosion to notice that robots are being used in warfare and worry about this.
(I suspect that if SI found the right way to communicate their core message, they could persuade average people that AI research is dangerous pretty easily without any technical jargon or reference to science fiction concepts.)
And contestants will probably make at least some progress on Friendliness in the course of participating.
On the other hand, if the contest is easy and fails to reflect real-world Friendliness challenges, then its effect could be negative.
(I suspect that if SI found the right way to communicate their core message, they could persuade average people that AI research is dangerous pretty easily without any technical jargon or reference to science fiction concepts.)
I have no doubt of this. It’s not difficult to convince average people that a given technological innovation is dangerous. Whether doing so would cause more good than harm is a different question.
Instead of the Singularity Institute having to prove that AGI can potentially be dangerous, AGI researchers should really have to prove the opposite.
How about we prove that teens texting cannot result in the emergence of a hivemind that would subsequently invent better hardware to run itself on and get rid of everyone?
How about you take AIXI, analyze it, and see that it doesn’t relate itself to its computational substrate, and is consequently unable to understand self-preservation? There are other, much more relevant ways of being safe than “ohh, it talks so moral”.
Make a “moral expert system” contest
Have a set of moral dilemmas, and
1) Through an online form, humans say what choice they would make in each situation.
2) There’s a contest to write a program that makes the same choices a human would in those situations.
(Or alternatively, a program that, given some of the choices a human made, guesses which other choices they made in the remaining situations; a rough sketch of what that might look like follows below.)
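Purely as an illustration of that second variant, here is a minimal sketch of a program that guesses a person’s remaining answers from the ones they have already given, using a simple nearest-neighbour vote over earlier respondents. The binary yes/no encoding, the data layout, and every function name here are my own assumptions, not anything specified in the contest proposal.

```python
# Hypothetical sketch: predict a person's unseen answers to moral dilemmas
# from the answers they have already given, by majority vote among the
# k earlier respondents whose known answers agree with theirs the most.
# Assumes each dilemma has a binary choice encoded as 0 or 1.
import numpy as np

def predict_missing_choices(known, responses, k=5):
    """known: {dilemma_index: 0 or 1} for the new person.
    responses: (n_people, n_dilemmas) array of earlier respondents' answers.
    Returns a full predicted answer vector for the new person."""
    idx = np.array(sorted(known))
    target = np.array([known[i] for i in idx])

    # Similarity = fraction of the known dilemmas answered the same way.
    agreement = (responses[:, idx] == target).mean(axis=1)
    neighbours = np.argsort(-agreement)[:k]

    # Majority vote of the k most similar respondents on every dilemma.
    predicted = (responses[neighbours].mean(axis=0) >= 0.5).astype(int)
    predicted[idx] = target  # keep the answers we were actually given
    return predicted

# Toy usage: four earlier respondents, five dilemmas, and a new person
# who has answered the first three.
responses = np.array([[1, 0, 1, 1, 0],
                      [1, 0, 1, 0, 0],
                      [0, 1, 0, 0, 1],
                      [1, 0, 0, 1, 0]])
print(predict_missing_choices({0: 1, 1: 0, 2: 1}, responses, k=2))
```

A real contest entry would need something far better than this, but even a crude baseline like it would let the organisers score submissions by how often they match held-out human answers.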