I think I care a bunch about the subject matter of this post, but something about the way it’s written leaves me feeling confused and ungrounded.
Before reading this post, my background beliefs were:
Rationality doesn’t (quite) equal Systemized Winning. Or rather, focusing on this seems to lead people astray more than it helps them.
There are probably some laws of cognition to be discovered, about what sort of cognition will have various good properties, in idealized situations.
There are probably some messier laws of cognition that apply to humans (but those laws are maybe more complicated).
Neither set of laws necessarily has a simple unifying framework that accomplishes All the Things (although I think the search for simplicity/elegance/all-inclusiveness is probably productive, i.e. it tends to yield good stuff along the way; “more elegance” is usually achievable on the margin).
There might be heuristics that work moderately well for humans much of the time, which approximate those laws.
There are probably Very Rough heuristics you can tell an average person without lots of dependencies, and somewhat better heuristics you can give to people who are willing to learn lots of subskills.
Given all that… is there anything in particular I am meant to take from this post? (I have only skimmed it so far; it felt effortful to comb for the novel bits.) I can’t tell whether the few concrete bits are particularly important, or just illustrative examples. Put another way: this post seems like it’s arguing with someone but I’m not sure who.
The key claim is: You can’t evaluate which beliefs and decision theory to endorse just by asking “which ones perform the best?”, because the whole question is what it means to systematically perform better under uncertainty. Every operationalization of “systematically performing better” we’re aware of is either:
Incomplete — like “avoiding dominated strategies”, which leaves a lot unconstrained;
A poorly motivated proxy for the performance we actually care about — like “doing what’s worked in the past”; or
Secretly smuggling in nontrivial non-pragmatic assumptions — like “doing what’s worked in the past, not because that’s what we actually care about, but because past performance predicts future performance”.
This is what we meant to convey with this sentence: “On any way of making sense of those words, we end up either calling a very wide range of beliefs and decisions “rational”, or reifying an objective that has nothing to do with our terminal goals without some substantive assumptions.”
(I can’t tell from your comment whether you agree with all of that. If this was all obvious to you, great! But we’ve often had discussions where someone appealed to “which ones perform the best?” in a way that misses these points.)
My understanding from discussions with the authors (but please correct me):
This post is less about pragmatically analyzing which particular heuristics work best for ideal or non-ideal agents in common environments (assuming a background conception of normativity), and more about the philosophical underpinnings of normativity itself.
Maybe it’s easiest if I explain what this post grows out of:
There seems to be a widespread vibe amongst rationalists that “one-boxing in Newcomb is objectively better, because you simply obtain more money, that is, you simply win”. This vibe is no coincidence, since Eliezer and Nate, in some of their writing about FDT, use language strongly implying that decision theory A is objectively better than decision theory B because it just wins more. Unfortunately, this intuitive notion of winning cannot actually be made into a philosophically valid objective metric. (In more detail, a precise definition of winning is already decision-theory-complete, so these arguments beg the question.) This point is well-known in philosophical academia, and was already succinctly explained in a post by Caspar (which the authors mention).
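To make the question-begging concrete, here is a minimal sketch in Python. It is not from the post, and the numbers are illustrative assumptions (a 99%-accurate predictor, $1,000 in the transparent box, $1,000,000 in the opaque box): evidential decision theory (EDT) and causal decision theory (CDT) each certify a different act as the one that “wins”.

```python
# Newcomb's problem, with illustrative (assumed) numbers:
# a 99%-accurate predictor, $1,000 transparent box, $1,000,000 opaque box.
ACCURACY = 0.99
SMALL, BIG = 1_000, 1_000_000

def edt_value(act):
    # EDT conditions on the act: choosing to one-box is strong evidence
    # that the predictor foresaw it and filled the opaque box.
    p_big = ACCURACY if act == "one-box" else 1 - ACCURACY
    return p_big * BIG + (SMALL if act == "two-box" else 0)

def cdt_value(act, p_big_fixed):
    # CDT holds the (causally independent) prediction fixed: whatever is
    # already in the opaque box, taking both boxes adds $1,000.
    return p_big_fixed * BIG + (SMALL if act == "two-box" else 0)

for act in ("one-box", "two-box"):
    print(act, "| EDT:", round(edt_value(act)),
          "| CDT at any fixed p, e.g. 0.5:", round(cdt_value(act, 0.5)))
```

EDT ranks one-boxing higher (roughly $990,000 vs. $11,000), while CDT ranks two-boxing higher at every fixed probability. So “just pick the option that gets more money” already presupposes one of these ways of computing what an option gets you, which is the sense in which a precise definition of winning is decision-theory-complete.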
In the current post, the authors extend a similar philosophical critique to other widespread uses of winning, and to other background assumptions about rationality. For example, some people say that “winning is about not playing dominated strategies”… and the authors agree that dominated strategies should be avoided, but point out that this is not very action-guiding, because it is consistent with many policies. Others say that “rationality is about implementing the heuristics that have worked well in the past, and/or that you expect to lead to good future performance”… but these utterances hide further philosophical assumptions, like the assumption that the same mechanisms are at play in the past and the future, which is especially tenuous for big problems like x-risk. Thus, vague references to winning aren’t enough to completely pin down and justify behavior. Instead, we fundamentally need additional constraints or principles about normativity, which the authors call non-pragmatic principles. Of course, these principles cannot themselves be justified in terms of past performance (that would be circular), so they instead need to be taken as normative axioms (just as we need ethical axioms, because ought cannot be derived from is).
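As a toy illustration of the “consistent with many policies” point, here is a minimal sketch with made-up payoff numbers: after removing every option that is strictly worse than some alternative in all states of the world, several mutually incompatible options still survive, so dominance alone cannot pick one.

```python
# Payoffs of four options across three possible states of the world
# (hypothetical numbers, chosen only to illustrate the point).
payoffs = {
    "A": (10, 0, 5),
    "B": (0, 10, 5),
    "C": (4, 4, 4),
    "D": (3, 3, 3),  # strictly dominated by C
}

def strictly_dominated(x, y):
    # x is strictly dominated by y if y does better in every state.
    return all(px < py for px, py in zip(payoffs[x], payoffs[y]))

survivors = [x for x in payoffs
             if not any(strictly_dominated(x, y) for y in payoffs if y != x)]
print(survivors)  # ['A', 'B', 'C']: dominance rules out only D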
Thanks, this gave me the context I needed.
some people say that “winning is about not playing dominated strategies”

I do not believe this statement. As in, I do not currently know of a single person, associated either with LW or with decision-theory academia, who says “not playing dominated strategies is entirely action-guiding.” So, as Raemon pointed out, “this post seems like it’s arguing with someone but I’m not sure who.”
In general, I tend to mildly disapprove of words like “a widely-used strategy”, “we often encounter claims”, etc., without any direct citations to the individuals who are purportedly making these mistakes. If it really were that widely used, surely it would be trivial for the authors to quote a few examples off the top of their head, no? What does it say about them that they didn’t?
mildly disapprove of words like “a widely-used strategy”

The text says “A widely-used strategy for arguing for norms of rationality involves avoiding dominated strategies”, which is true* and something we thought would be familiar to everyone who is interested in these topics. For example, see the discussion of Dutch book arguments in the SEP entry on Bayesianism and all of the LessWrong discussion on money pump/dominance/sure loss arguments (e.g., see all of the references in and comments on this post). But fair enough, it would have been better to include citations.
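For readers who haven’t seen a Dutch book argument spelled out, here is a minimal sketch with made-up numbers: an agent whose credences in an event and its negation sum to more than 1 will regard each of a pair of bets as fairly priced, yet the pair together loses money in every possible world, i.e. it is a sure loss.

```python
# Incoherent credences (assumed for illustration): they sum to 1.2.
p_rain, p_not_rain = 0.6, 0.6
stake = 1.0  # each bet pays $1 if it wins

# The agent treats credence * stake as a fair price for each bet,
# so it willingly pays $1.20 for the pair.
total_cost = (p_rain + p_not_rain) * stake
payout = stake  # exactly one of the two bets pays out, whatever happens

for world in ("rain", "no rain"):
    print(world, "| net:", round(payout - total_cost, 2))  # -0.2 either way
```

Whatever happens, the agent ends up $0.20 worse off than if it had declined both bets, which is the sense in which incoherent credences leave it with a guaranteed loss.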
“we often encounter claims”

We did include (potential) examples in this case. Also, similarly to the above, I would think that encountering claims like “we ought to use some heuristic because it has worked well in the past” is commonplace among readers, so we didn’t see the need to provide extensive evidence.
*Granted, we are using “dominated strategy” in the wide sense of “strategy that you are certain is worse than something else”, which glosses over technical points like the distinction between dominated strategy and sure loss.