Facing the Intelligence Explosion was my own attempt at doing this project. I wrote most of it a long time ago and would write it pretty differently today, but it seems like it could accomplish much of what you’re hoping for.
“So You Want to Save the World” is embarrassingly inadequate, but still more helpful than nothing (I hope). I mostly want it to be read by people who already grok the problem (e.g. Benja, who used a reference from my article to do this) and can look past its embarrassingness, not by newbies, who are more likely to be turned away from the issues altogether by the fact that “So You Want to Save the World” is kinda embarrassing and still basically the only available survey of open problems in FAI.
But the best thing available (very soon) is probably Our Final Invention, which reads like a detective novel and covers most of the basic points.
What are the main things you find embarrassingly inadequate, and/or newbie-turning-away, about So You Want to Save the World (LW, off-site update)?
Since this rationalist mix tape isn’t directed at specialists, my goals for its ‘Open problems in FAI’ portion are:
Impress. Make it clear that FAI is a Real Thing that flesh-and-blood mathematicians are working on. Lots of scary-looking equations probably aren’t useful for this, but formal jargon certainly can be, since it makes us look more legit than terms like ‘Fun Theory’ or ‘Friendliness Theory’ would initially suggest. Relatedly...
Intimidate. Decrease armchair philosophers’ confidence that they can shrug off the Five Theses without thinking (and researching) long and hard about them. This is important, because I’ve set up my Introduction to appeal to people of philosophical, big-picture, let-me-figure-this-out-for-myself mindsets. Impatience, imprecision, and egomania are big failure modes for that demographic, so a lot of good can be done just by showing what patience, precision, and intellectual humility look like in this arena.
Ground. Increase people’s interest in pragmatic, solutions-oriented responses to the Friendliness problem, by providing a model for how such responses will look. Mitigate the airy and theoretical tendencies that EY’s writing can encourage (and that my target audience is already disposed toward), as well as various defeatism/nihilism/relativism attractors.
Intrigue. Have a couple of the easier-to-explain open problems tempt the reader into digging deeper.
Given the four goals I outlined above (Impress, Intimidate, Ground, Intrigue), do you think they are good objectives for the FAI Open Problems part of the Introduction? And does this goalset change how relevant and useful you think “So You Want to Save the World” is?
I like that “Intimidate” is explicitly one of your goals. In practice this is a key part of any attempt to introduce people to difficult problems without their dismissing those problems right away, but I’ve never used such a stark term for it before.
Oooh, I’ll check out OFI.
“So You Want to Save the World”...
is too informal and cheeky in tone
doesn’t describe the actual problems in any detail but just hand-waves in their direction with a few sorta-kinda relevant citations
doesn’t focus attention on the most important problems at all
doesn’t explain some of the most important problems
is terrible for meeting your stated goals
OFI looks nice, but is there going to be a Kindle edition?
Amazon doesn’t list one, which I suppose could be an artifact of it not being out yet.