Heresy alert: Eliezer seems to be better at writing than he is at AI theory. Maybe he should write a big piece of SF about unfriendly and friendly AI to make these concepts as popular as Skynet or the Matrix. A textbook on rationality won’t have as much impact.
I don’t know that Eliezer Yudkowsky has spent much time talking about AI theory in this forum such that his competence would be obvious—but either way, the math of the decision theory is not as simple as “do what you are best at”.
It might not even be as simple as comparative advantage, but there are certainly more good writers in the world than good AI theorists.
Or the Da Vinci Code. EMP attacks, rogue AI researchers, counterfactual terrorists, conflicts between FAI coders, sudden breakthroughs in molecular nanotechnology, SL5 decision theory insights, the Bayesian Conspiracy, the Cooperative Conspiracy, bioweapons, mad scientists trying to make utility monsters to hack CEV, governmental restrictions on AI research, quantum immortality (to be used as a plot device), and maybe even a glimpse of fun theory. Add in a gratuitous romantic interest to teach the readers about the importance of humanity and the thousand shards of desire.
Oh, and the main character is Juergen Schmidhuber. YES.
By the way, writing such a book would probably lead to the destruction of the world, which is probably a major reason why Eliezer hasn’t done it.
Marcus Hutter and the Prophets of Singularity. Works fine as a band name, too.
Stop that, you’ll make me think of a sequel to HPMoR.
cousin_it:
I don’t think this would be a good strategy. In the general public, including the overwhelming part of the intelligentsia, SF associations are not exactly apt to induce intellectual respect and serious attention.
If you don’t have the weight of academia on your side, writing SF will work better than writing pop-sci books the way Drexler did.