I like this framework, but it also reminds me of a case of informed-consent failure: “This site uses cookies. By continuing to browse the site, you are agreeing to our use of cookies. Find out more”, and of all the other user agreements that nobody reads.
Anyway, building a robot that can discern different types of consent is an AI-safety-complete task, so AI safety would have to be solved before such a robot arrives in the user’s home. I explored a similar model in “Dangerous value learners.”
Rephrasing a command is a good way to ensure understanding and establish consent, as in: Alice: “I want coffee in bed”; Robot: “Do you want it poured in the bed?”
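A minimal sketch of such a paraphrase-and-confirm loop, assuming a hypothetical `paraphrase()` helper and a plain text interface rather than any particular robot API: the robot restates the command in its own words and only acts on an explicit “yes”, so ambiguous readings surface before anything is poured anywhere.

```python
def paraphrase(command: str) -> str:
    # Placeholder: a real system would use a language model or grammar
    # to restate the command in its own words.
    return f"I should {command}"


def confirm_command(command: str) -> bool:
    """Echo the robot's interpretation back to the user and require an
    explicit 'yes' before acting."""
    interpretation = paraphrase(command)
    answer = input(f"Did you mean: {interpretation}? (yes/no) ")
    return answer.strip().lower() == "yes"


if __name__ == "__main__":
    cmd = "bring coffee to the bed"
    if confirm_command(cmd):
        print("Acting on:", cmd)
    else:
        print("Asking for clarification instead of acting.")
```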