I am probably stating the obvious, but this is not a guide to developing a Friendly AI.
It’s more like a list of things that people who consider themselves ethical should think about when developing a self-driving car, a drone, or a data-mining program that works with personal data.
And I don’t feel impressed by it, but I am not sure what else they could have written to impress me more. Considering it’s a document produced by a committee, it could have been much worse. Maybe we should have higher standards for a technical committee, but it was not realistic to expect them to provide a technical implementation of robotic ethics.
Considering it’s a document produced by a committee, it could have been much worse.
I think people underestimate the degree to which 90% of everything is just showing up; the section that Kaj was excited about (section 4) has its author list on page 120, and it’s names that either are or should be familiar:
Malo Bourgon (Co-Chair) – COO, Machine Intelligence Research Institute
Richard Mallah (Co-Chair) – Director of Advanced Analytics, Cambridge Semantics; Director of AI Projects, Future of Life Institute
Paul Christiano – PhD Student, Theory of Computing Group, UC Berkeley
Bart Selman – Professor of Computer Science, Cornell University
Carrick Flynn – Research Assistant at Future of Humanity Institute, University of Oxford
Roman Yampolskiy, PhD – Associate Professor and Director, Cyber Security Laboratory; Computer Engineering and Computer Science, University of Louisville
And several more of us were at the workshop that worked on and endorsed this section at the Hague meeting—Anders Sandberg (FHI), Huw Price and myself (CSER). But regardless, the important thing is that a good section on long-term AI safety showed up in a major IEEE output—otherwise I’m confident it would have been terrible ;)
Considering it’s a document produced by a committee, it could have been much worse.
This is what jumped out at me pretty strongly. In general I have been surprised by the non-terribleness of stuff like this and the White House report, considering the kind of bullshit that many academically ordained AI experts were spouting over the last few years when confronted with AI safety arguments.
edit: some parts do look laughably bad, which undermines how seriously anyone serious takes this.