His argument was that in cases where some kind of infinity is on the table, aiming to satisfice rather than optimize can be the better strategy.
Can we apply that to decisions about very-long-term-but-not-infinitely-long times and very-small-but-not-infinitely-small risks?
Hmm… it appears not. So I don’t think that helps us.
Where did you get the term “satisfice”? I just read that Dutch-book post, and while Eliezer points out the flaw in demanding that the Bayesian take the infinite bet, I didn’t see the word ‘satisficing’ in there anywhere.
Huh, I must have “remembered” that term into the post. What I mean is more succinctly put in this comment.
This question still confuses me, though; if it’s a reasonable strategy to stop at N in the infinite case, but not a reasonable strategy to stop at N if there are only N^^^N iterations… something about it disturbs me, and I’m not sure that Eliezer’s answer is actually a good patch for the St. Petersburg Paradox.
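For concreteness, here’s a small Python sketch (my own illustration, not from the post; I’m assuming the standard payout convention of $2^k for a first heads on flip k) of why truncation changes the picture: each term of the St. Petersburg expectation contributes exactly 1, so the truncated game has a finite expected value that just grows linearly with the horizon, while the infinite series diverges. A stopping rule defended against a divergent expectation isn’t obviously defensible once the horizon is finite, however huge.

```python
from fractions import Fraction

def st_petersburg_ev(max_flips):
    """Expected payout of the St. Petersburg game truncated at max_flips tosses.

    Assumed setup: flip a fair coin until the first heads; if heads first
    appears on flip k, the payout is 2**k. (The payout convention is my
    assumption; the divergence argument works the same either way.)
    """
    # Each term is P(first heads on flip k) * payout = (1/2**k) * 2**k = 1,
    # so the truncated expectation grows linearly in max_flips and the
    # untruncated series diverges.
    return sum(Fraction(1, 2**k) * 2**k for k in range(1, max_flips + 1))

for n in (10, 100, 1000):
    print(n, st_petersburg_ev(n))  # prints 10, 100, 1000
```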
It’s an old AI term meaning roughly “find a solution that isn’t (likely) optimal, but good enough for some purpose, without too much effort”. It implies that either your computer is too slow for it to be economical to find the true optimum under your models, or that you’re too dumb to come up with the right models, thus the popularity of the idea in AI research.
You can be impressed if someone starts with a criterion for what “good enough” means, and then comes up with a method they can prove meets that criterion. Otherwise it’s spin.
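For what it’s worth, here’s a minimal sketch of the AI sense of the term (the function names, scoring, threshold, and effort budget are all mine, purely illustrative): a satisficer stops as soon as it finds something meeting an explicit “good enough” criterion, rather than asking whether anything better exists.

```python
import random

def satisfice(candidates, score, good_enough, max_evals):
    """Return the first candidate whose score meets the good_enough
    threshold, giving up after max_evals evaluations.

    Unlike an optimizer, this never checks whether a better candidate
    exists; it stops the moment the aspiration level is met.
    """
    best = None
    for evals, c in enumerate(candidates):
        if evals >= max_evals:
            break  # effort budget exhausted: settle for the best seen
        s = score(c)
        if best is None or s > best[1]:
            best = (c, s)
        if s >= good_enough:
            return c  # satisficed: good enough, stop searching
    return best[0] if best else None

# Toy usage: find any x with x**2 >= 0.9 among random draws.
xs = (random.random() for _ in range(10_000))
print(satisfice(xs, lambda x: x * x, good_enough=0.9, max_evals=1000))
```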
I’m more used to it as a psychology (or behavioral econ) term for a specific, psychologically realistic form of bounded rationality. In particular, I’m used to it being negative! (that is, a heuristic which often degenerates, producing a bias)