No argument from me on any of that. I’ve said a couple of times on LW that I don’t believe that people have or can have utility functions (I’m not alone on LW there), that approximating Solomonoff induction is impractical on at least an NP scale, and that large-world Bayesianism is tantamount to AGI, which nobody knows how to do. (Framing the AGI problem as LWB doesn’t help solve AGI, it’s an argument against expecting to succeed at LWB.)
But where does that leave us? What does one do instead?
To abandon “rationality” is to throw out the baby with the bathwater. The application of Bayes’ theorem to screening tests for rare conditions remains just as valid, just as practical, and just as essential for making correct medical decisions. Noticing that a discussion has wandered into disputing definitions remains just as valid a sign that the discussion has gone wrong. The usefulness of checklists for ensuring that complex procedures are reliably performed does not go away. When thine I offends thee, try the many practical pieces of advice for personal development that have appeared here or elsewhere and see what one can make work. And so on.
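The screening-test point is easy to make concrete. A minimal sketch, with illustrative numbers rather than clinical ones (a 0.1% base rate, 99% sensitivity, 95% specificity), shows why a positive result on a test for a rare condition can still leave the probability of having the condition low:

```python
def posterior_given_positive(base_rate, sensitivity, specificity):
    """P(condition | positive test) via Bayes' theorem.

    base_rate:   prior probability of the condition
    sensitivity: P(positive | condition)
    specificity: P(negative | no condition)
    """
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# With a 0.1% base rate, even a quite accurate test yields a posterior
# under 2%: false positives from the healthy majority swamp true positives.
p = posterior_given_positive(0.001, 0.99, 0.95)
print(f"{p:.3f}")  # prints 0.019
```

This is exactly the kind of small-world calculation that stays useful regardless of what one thinks about the large-world program.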
All of this “small-world” rationality may not be as exciting, to some temperaments, as talking about AGIs, uploads, Tegmark multiverses, and 3^^^3 specks, but it has the great advantage (to my temperament) of having an actual subject matter here and now, and of not driving crazy the people who take it seriously. It works when taken seriously. And there’s this to consider: it counts against the wilder speculations of LW just as much as against kobolds.
For the rest, my rule of thumb (the small-world response to large-world problems, as Diaconis says in that paper) for judging whether any of that speculation is worth anything is this: if it isn’t being done with mathematics, it’s useless. A necessary condition, not a sufficient one, but it weeds out almost everything. AGI? Make one. Friendliness proof methods? Write them up in the Journal of Symbolic Logic. TDT? Ditto.