(I come from a machine learning background, and so I am predisposed to look down on the intelligent agents/cognitive modelling folks, but the project description in this press release just seems laughable. And if the goal of the research is to formalize moral reasoning, why the link to robotic/military systems, besides just to snatch up US military grants?)
I did not find the project so laughable. It’s hopelessly outdated in the sense that a logical calculus does not deal with incomplete information, and I suspect that they simply conflate “moral” with “utilitarian” or even just “decision-theoretic”.
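To make that concrete, here is a toy contrast (the function names, scenario, and numbers are all invented for illustration and have nothing to do with the RAIR project’s actual formalism): a hard logical rule simply returns no verdict when a premise is unknown, whereas a decision-theoretic agent can still act on a probability estimate.

```python
def deontic_rule(harms_human):
    """Strict rule: the action is forbidden iff it harms a human."""
    if harms_human is None:   # incomplete information: the rule is simply silent
        return "no verdict"
    return "forbidden" if harms_human else "permitted"


def expected_utility_choice(p_harm, u_harm=-100.0, u_benefit=1.0):
    """Decision-theoretic version: act iff the expected utility of acting is positive."""
    eu_act = p_harm * u_harm + (1 - p_harm) * u_benefit
    return ("act" if eu_act > 0 else "refrain"), eu_act


print(deontic_rule(None))                     # -> 'no verdict'
print(expected_utility_choice(p_harm=0.005))  # -> ('act', 0.495)
```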
It appears they are going with some kind of modal logic, which also does not appear to deal with incomplete information. I also suspect “moral” will be conflated with “utilitarian” or “utilitarian plus a diff”. But then there is this bit in the press release:
Bringsjord’s first step in designing ethically logical robots is translating moral theory into the language of logic and mathematics. A robot, or any machine, can only do tasks that can be expressed mathematically. With help from Rensselaer professor Mei Si, an expert in the computational modeling of emotions, the aim is to capture in “Vulcan” logic such emotions as vengefulness.
...which makes it sound like the utility function/moral framework will be even more ad hoc.
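Roughly the kind of thing I’m worried about (a made-up sketch; the weights, the “vengefulness” term, and the outcome features are invented, not anything from Bringsjord’s or Si’s actual work) is emotions bolted onto a utility function as extra hand-tuned terms:

```python
def moral_score(outcome, w_welfare=1.0, w_vengefulness=0.3, w_duty=0.5):
    """Hand-tuned 'moral' utility with an ad hoc emotional term bolted on."""
    return (w_welfare        * outcome["welfare"]
            + w_vengefulness * outcome["payback"]          # the "vengefulness" term
            + w_duty         * outcome["rule_compliance"])


outcome = {"welfare": -2.0, "payback": 6.0, "rule_compliance": 1.0}
print(moral_score(outcome))  # -2.0 + 1.8 + 0.5 = 0.3: the emotional term flips the verdict
```

Nothing in the logic forces a principled choice of those weights, which is the sense in which this looks even more ad hoc than plain utilitarianism.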
Possibly of local interest: Research on moral reasoning in intelligent agents by the Rensselaer AI and Reasoning Lab.
Unfortunately, in the future we will be allowed to have only the emotions Mei Si is an expert in :/