Considering it’s a document produced by a committee, it could have been much worse.
I think people underestimate the degree to which 90% of everything is showing up; the section Kaj was excited about (section 4) has its author list on page 120, and the names are ones that either are or should be familiar:
Malo Bourgon (Co-Chair) – COO, Machine Intelligence Research Institute
Richard Mallah (Co-Chair) – Director of Advanced Analytics, Cambridge Semantics; Director of AI Projects, Future of Life Institute
Paul Christiano – PhD Student, Theory of Computing Group, UC Berkeley
Bart Selman – Professor of Computer Science, Cornell University
Carrick Flynn – Research Assistant at Future of Humanity Institute, University of Oxford
Roman Yampolskiy, PhD – Associate Professor and Director, Cyber Security Laboratory; Computer Engineering and Computer Science, University of Louisville
And several more of us were at the workshop that worked on and endorsed this section at the Hague meeting: Anders Sandberg (FHI), Huw Price, and myself (CSER). But regardless, the important thing is that a good section on long-term AI safety made it into a major IEEE output; if the right people hadn't shown up, I'm confident it would have been terrible ;)