CFP for Rebellion and Disobedience in AI workshop
Hi everyone!
I’m co-organizing a workshop on a really interesting topic that’s very relevant for AI safety. We call it “Rebellion and Disobedience in AI”. If you’re doing work that could be relevant for us, please submit it! If you have questions or want to discuss the scope of this workshop, feel free to ask on this thread and I’ll try to answer.
Full CFP below:
Call for Participation: Workshop on Rebellion and Disobedience at AAMAS’23
This workshop will take place on May 29 or 30, 2023, as part of the AAMAS workshop program.
More details can be found on the workshop’s website:
https://sites.google.com/view/rad-ai/home
RaD-AI agents are artificial agents (virtual or robotic) that reason intelligently about why, when, and how to rebel against and disobey their given commands. The need for agents to disobey contrasts with most existing research on collaborative robots and agents, where a “good” agent is defined as one that complies with the commands it is given and works in a predictable manner under the consent of the human it serves. However, as exemplified in Isaac Asimov’s Second Law of Robotics, this compliance is not always desired, such as when it would interfere with a human’s safety. While there has not been much prior research on RaD-AI, we identify three main related topics, each studied by a thriving subcommunity of AI: Intelligent Social Agents, Human-Agent/Robot Interaction, and Societal Impacts. Each of these areas contains research questions relevant to RaD-AI.
We are specifically interested in submissions on the following topics:
Intelligent Social Agents (including but not limited to: Goal Reasoning, Plan Recognition, Value Alignment, and Social Dilemmas)
Human-Agent/Robot Interaction (including but not limited to: Human-agent Trust, Interruptions, Deception, Command Rejection, and Explainability)
Societal Impacts (including but not limited to: Legal and Ethical Reasoning, Liability, AI safety, and AI governance)
Submission details:
The submission deadline is January 20, 2023.
Notifications will be sent on March 13, 2023.
The submission website is: https://easychair.org/cfp/radai23
Accepted submission types:
Regular Research Papers (6 to 8 pages)
Short Research Papers (up to 4 pages)
Position Papers (up to 2 pages)
Tool Talks (up to 2 pages)
Organizing Committee:
David Aha, Navy Center for Applied Research in AI; Naval Research Laboratory; Washington, DC; USA
Gordon Briggs, Navy Center for Applied Research in AI; Naval Research Laboratory; Washington, DC; USA
Reuth Mirsky, Department of Computer Science; Bar-Ilan University; Israel (mirskyr@cs.biu.ac.il)
Ram Rachum, Department of Computer Science; Bar-Ilan University; Israel
Kantwon L. Rogers, Department of Computer Science; Georgia Tech; USA
Peter Stone, The University of Texas at Austin; USA and Sony AI