Ben Goertzel’s review in H+ Magazine. Excerpt:

The booklet is clearly written—very lucid and articulate, and pleasantly lacking the copious use of insider vocabulary that marks much of the writing of the MIRI community. It’s worth reading as an elegant representation of a certain perspective on the future of AGI, humanity and the world.
Having said that, though, I also have to add that I find some of the core ideas in the book highly unrealistic.
The title of this article summarizes one of my main disagreements. Armstrong seriously seems to believe that doing analytical philosophy (specifically, moral philosophy aimed at formalizing and clarifying human values so they can be used to structure AGI value systems) is likely to save the world.
My response in the comment section:

What I expect from formal “analytic philosophy” methods:
1) A useful decomposition of the issue into problems and subproblems (e.g. AI goal stability, AI agency, reduced impact, correct physical models of the universe, correct models of fuzzy human concepts such as human beings, convergence or divergence of goals, etc.).
2) Full or partial solutions to some of the subproblems, ideally of general applicability (so they can be added easily to any AI design).
3) A good understanding of the remaining holes.
and lastly:
4) Exposing the implicit assumptions in proposed (non-analytic) solutions to the AI risk problem, so that the naive approaches can be discarded and the better approaches improved.
Ben has expanded his original article by editing a reply to your points into the end of it.
Sigh… I’ll have to get round to addressing that point (though I’ve addressed it several times already).