Day 4 of forced writing with an accountability partner!
The Importance (and Potential Failure) of “Pragmatism”[1] in Definitional Debates
In various settings, whether it’s competitive debate, the philosophy of leadership class I took in undergrad, or the ACX philosophy of science meet-up I just attended, it’s common for people to engage in definitional debates. For example, what is “science?” What is “leadership?” These questions touch on some nerves with people who want to defend or challenge the general concept in question, and it drives people towards debating about “the right” definitions—even if they don’t always say it that way. In competitive debate, debaters will sometimes explicitly say that their definition is the “right” definition, while in other cases they may say their definition is “better” with a clear implication that they mean “more correct” (e.g., “our dictionary/source is better than yours”).
My initial (hot?) takes here are twofold:
First, when you find yourself in a muddy definitional debate (and you actually want to make progress), stop running on autopilot where you debate about whose definitions are “correct,” and focus instead on asking the pragmatic question: which definition is more helpful for answering specific questions, solving specific problems, or generally facilitating better discussion? Instead of getting stuck on abstract definitions, it’s important to tailor the definition to the purpose of the discussion. For example, if you’re trying to run a study on the effects of individual “leadership” on business productivity, you should make sure anyone reading the study knows how you operationalized that variable (and clearly warn readers not to misinterpret it). Similarly, if you’re judging a competitive debate, I’ve written about the importance of “debate theory[2] which makes debate more net beneficial,” rather than blindly following norms or rules even in the face of loopholes or nonsense. In short, figure out what you’re actually optimizing for and optimize for that, with the recognition that it may not be some abstract (and perhaps purely nonexistent) notion of “correctness.” (To add an addendum, I would emphasize that regardless of whether this seems obvious when actually written down, in practice it just isn’t obvious to people in so many discussions I’ve been in; autopilot is subtle and powerful.)
Second, sometimes the first point is misleading and you should reject it and run on autopilot when it comes to definitions. As much as I liked Pragmatism [read: Consequentialism?] as a unifying, bedrock theory of competitive debate, I acknowledged that even Pragmatism could theoretically say “don’t always think in terms of Pragmatism” and instead advocate defaulting to principles like “follow the rules unless there is abundantly clear reason not to.” Maybe there is no perfect definition of things like “elephant,” but the definitions that exist are good enough for most conversations that you shouldn’t interrupt discussions and break out the Pragmatism argument to defend someone who starts saying that warthogs are elephants. So-called “Utilitarian calculus” even in its mild forms can easily be outperformed by rules of thumb and heuristics; humans are imperfect (e.g., we aren’t perfectly unitary in our own interests) and might be subject to self-deception/bias; all computational systems face constraints on data collection and computation (along with communication bandwidth and other capacity for enacting plans). To oversimplify and make nods to Kahneman’s System 1 vs. System 2 concept, I posit that humans can engage in cluster-y “modes of thought,” and it’s hard to actually optimize in the spaces between those modes of thought. Thus, it’s sometimes better to just default to regular conversational autopilot regarding abstract “correctness” of definitions when the “rightness factor” in a given context is something like 0.998 (unless you are trying to focus on the .002 exception case).
I don’t have the time or brainpower to go into greater detail on the synthesis of these two points, but I think they ought to be highlighted.
[Update, 3/29/23: I meant to clarify that I realize “Pragmatism” is an actual label that some people use to refer to a philosophical school of thought, but I’m not using it in that way here.]
I use the term “debate theory” in a broad sense that includes questions like “how to decide which definitions are better.” More generally, I would probably describe it as “meta-level arguments about how people—especially judges—should evaluate something in debate, such as whether some type of argument is ‘legitimate.’”
I try to ask myself whether the tenor of what I’m saying overshadows definitional specificity, and how I can provide a better mood or angle. If my argument is not atonal, that is, if my points line up coherently such that a willing ear will hear them, definitionalist debates should slide on by.
As a descriptivist rather than a prescriptivist, it really sucks to have to fall back on Socratic methods of pre-establishing definitions, except in highly technical contexts.
Thus, I prefer to avoid arguments which hinge on definitions altogether. This doesn’t preclude examples-based arguments, where, for example, various interlocutors are operating off different definitions of the same terms and thus reach for different examples.
For example, take the term TAI (transformative AI).
For some, TAI means not the point when AI becomes agentic, but when AI can transform the economy in some large or measurable way. For others, it is when the first agentic AI is deployed at scale. Still others have differing definitions! Definitions which wildly transform predictions and change alignment discussions. Despite using the term with each other in different ways, with separate definitions, interlocutors often do not notice (or perhaps are subconsciously able to resolve the discrepancy?)!
TAI seems like a partially good example for illustrating my point: I agree that it’s crucial that people have the same thing in mind when debating about TAI in a discussion, but I also think it’s important to recognize that the goal of the discussion is (probably!) not “how should everyone everywhere define TAI” and instead is probably something like “when will we first see ‘TAI.’” In that case, you should just choose whichever definition of TAI makes for a good, productive discussion, rather than trying to forcefully hammer out “the definition” of TAI.
I say partially good, however, because thankfully the term TAI has not yet taken historically established root in people’s minds or in dictionaries, so I think (hope!) most people accept that there is not “a (single) definition.”
Words like “science,” “leadership,” “Middle East,” and “ethics,” however… not the same story 😩🤖