Until very recently I thought it might be just me and that you people can calculate what you should do. But then I learnt that even important SI donors have similar problems. And other people as well.

The problem is that all the talk about approximations is complete handwaving and that you really can’t calculate shit. And even if you could, there doesn’t seem to be anything medium-probable that you could do about it.

‘Some years ago I was trying to decide whether or not to move to Harvard from Stanford. I had bored my friends silly with endless discussion. Finally, one of them said, “You’re one of our leading decision theorists. Maybe you should make a list of the costs and benefits and try to roughly calculate your expected utility.” Without thinking, I blurted out, “Come on, Sandy, this is serious.”’

— Persi Diaconis, in The Problem of Thinking Too Much
No argument from me on any of that. I’ve said a couple of times on LW that I don’t believe that people have or can have utility functions (I’m not alone on LW there), that approximating Solomonoff induction is computationally intractable (NP-hard at the very least), and that large-world Bayesianism (LWB) is tantamount to AGI, which nobody knows how to build. (Framing the AGI problem as LWB doesn’t help solve AGI; it’s an argument against expecting to succeed at LWB.)
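For concreteness, here is the object that would have to be approximated, sketched in the standard textbook notation (U is a universal prefix machine; nothing here is specific to LW):

$$M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}$$

The sum ranges over every program p whose output begins with x. Deciding which programs halt with such output is equivalent to the halting problem, so M is not even computable, and the known resource-bounded approximations are intractable.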
But where does that leave us? What does one do instead?
To abandon “rationality” is to throw out the baby with the bathwater. The application of Bayes’ theorem to screening tests for rare conditions remains just as valid, just as practical, and just as essential for making correct medical decisions (a worked example follows below). Noticing that a discussion has wandered into disputing definitions remains just as valid a sign that the discussion has gone wrong. The usefulness of checklists for ensuring that complex procedures are reliably performed does not go away. When thine I offends thee, try the many practical pieces of advice for personal development that have appeared here or elsewhere and see what you can make work. And so on.
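To make the screening-test point concrete, here is a minimal worked example in Python (the prevalence, sensitivity, and false-positive figures are illustrative assumptions, not taken from any real test):

```python
# Bayes' theorem applied to a screening test for a rare condition.
# All three numbers below are illustrative assumptions.

prevalence = 0.001          # P(disease): 1 in 1000
sensitivity = 0.99          # P(positive | disease)
false_positive_rate = 0.01  # P(positive | no disease)

# Law of total probability: P(positive)
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

# Bayes' theorem: P(disease | positive)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive) = {p_disease_given_positive:.3f}")
# => P(disease | positive) = 0.090
# Even with a 99%-sensitive test, a positive result is ~91% likely
# to be a false alarm, because the condition is so rare.
```

This is the standard base-rate calculation, and it illustrates the small-world virtue: the whole computation is small, checkable, and immediately useful.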
All of this “small-world” rationality may not be as exciting, to some temperaments, as talking about AGIs, uploads, Tegmark multiverses, and 3^^^3 specks, but it has the great advantage (to my temperament) of having an actual subject matter here and now, and of not driving people crazy who take it seriously. It works when taken seriously. And there’s this to consider: it counts against the wilder speculations of LW just as much as against kobolds.
For the rest, my rule of thumb (the small-world response to large-world problems, as Diaconis says in that paper) for deciding whether such work is worth attention is this: if it isn’t being done with mathematics, it’s useless. A necessary condition, not a sufficient one, but it weeds out almost everything. AGI? Make one. Friendliness proof methods? Write them up in the Journal of Symbolic Logic. TDT? Ditto.