Abstract. Insanity is doing the same thing over and over and expecting a
different result. “Friendly AI” (FAI) meets these criteria on four separate
counts by expecting a good result after: 1) it not only puts all of humanity’s
eggs into one basket but relies upon a totally new and untested basket, 2) it
allows fear to dictate our lives, 3) it divides the universe into us vs. them, and
finally 4) it rejects the value of diversity. In addition, FAI goal initialization
relies on being able to correctly calculate a “Coherent Extrapolated Volition of
Humanity” (CEV) via some as-yet-undiscovered algorithm. Rational Universal
Benevolence (RUB) is based upon established game theory and evolutionary
ethics and is simple, safe, stable, self-correcting, and sensitive to current human
thinking, intuitions, and feelings. Which strategy would you prefer to rest the
fate of humanity upon?
Based on the abstract, it’s not worth my time to read it.
Points 2), 3), and 4) are simply inane.
Upvoted, agreed, and an addendum: similarly inane is the cliché "insanity is doing the same thing over and over and expecting a different result."