I’d guess that reciprocal exchanges might work better for friends: I’ll read any m books you pick, so long as you read the n books I pick.
There’s less risk of the financial ick-factor, and it’s always possible that you’ll gain from reading the books they recommend.
Perhaps this could scale to public intellectuals where there’s either a feeling of trust or some verification mechanism (e.g. if the intellectual wants more people to read [some neglected X], and would willingly trade their time reading Y for a world where X were more widely appreciated).
Whether or not money is involved, I’m sceptical of the likely results for public intellectuals, or in general for people strongly attached to some viewpoint. The usual result seems to be a failure to engage with the relevant points. (Perhaps not attacking head-on is the best approach: e.g. the asymmetric weapons post might be a good place to start for Deutsch/Pinker.)
Specifically, I’m thinking of David Deutsch speaking about AGI risk with Sam Harris: he just ends up telling a story where things go ok (or no worse than with humans), and the implicit argument is something like “I can imagine things going ok, and people have been incorrectly worried about things before, so this will probably be fine too”. Certainly Sam’s not the greatest technical advocate on the AGI risk side, but “I can imagine things going ok...” is a pretty general strategy.
The same goes for Steven Pinker, who spends nearly two hours with Stuart Russell on the FLI podcast, and seems to avoid actually thinking in favour of simply repeating the things he already believes. There’s quite a bit of [I can imagine things going ok...], [People have been wrong about downsides in the past...], and [here’s an argument against your trivial example], but no engagement with the more general points behind the trivial example.
Steven Pinker has more than enough intelligence to engage properly and re-think things, but he just pattern-matches any AI risk argument to [some scary argument that the future will be worse] and short-circuits to Enlightenment Now cached thoughts. (To be fair to Steven, I imagine doing a book tour tends to set related cached thoughts in stone, so this is a particularly hard case… but you’d hope someone who focuses on the way the brain works would recognise this danger and adjust.)
When you’re up against this kind of pattern-matching, I don’t think even the ideal book is likely to do much good. If two hours with Stuart Russell doesn’t work, it’s hard to see what would.
I think the advantage of reading a book over having a conversation is that you’re less concerned with saving face or “winning”, so can focus more on the actual argument.
That’s a good point, though I do still think you need the right motivation. When you’re convinced you’re right, it’s very easy to skim past passages that are ‘obviously’ incorrect, and to fail to question your assumptions. (More generally, I wonder what a good heuristic is here: clearly it’s not practical to go back to first principles on everything, but I’m not sure how to distinguish [this person is applying a poor heuristic] from [this person is applying a good heuristic to very different initial beliefs].)
Perhaps the best approach would be a combination: a conversation that hopefully leaves you with the thought that you might be wrong, followed by the book, so you can go into things in your own time without so much worry over losing face or winning.
Another point on the cause-for-optimism side is that being earnestly interested in knowing the truth is a big first step, and I think that description fits everyone mentioned so far.