It seems relevant here that Zack pretty much agreed with my description: see his comments using terms like “deniable allegory”, “get away with it”, etc.
So, from my perspective, I’m facing a pretty difficult writing problem here! (See my reply to Dagon.) I agree that we don’t want Less Wrong to be a politicized space. On the other hand, I also think that a lot of self-identified rationalists are making a politically-motivated epistemology error in asserting category boundaries to be somewhat arbitrary, and it’s kind of difficult to address what I claim is the error without even so much as alluding to the object-level situation that I think is motivating the error! For the long, object-level discussion, see my reply to Scott Alexander, “The Categories Were Made for Man To Make Predictions”. (Sorry if the byline mismatch causes confusion; I’m using a pen name for that blog.) I didn’t want to share “… To Make Predictions” on Less Wrong (er, at least not as a top-level post), because that clearly would be too political. But I thought the “Blegg Mode” parable was sufficiently sanitized that it would be OK to share as a link post here?
I confess that I didn’t put a lot of thought into the description text which you thought was disingenuous. I don’t think I was being consciously disingenuous (bad intent is a disposition, not a feeling!), but after you pointed it out, I do see your point that, since there is some unavoidable political context here, it’s probably better to explicitly label that, because readers who had a prior expectation that no such context would exist would feel misled upon discovering it. So I added the “Content notice” to the description. Hopefully that addresses the concern?
Trans-ness is not always “cheap to detect”. I guess it’s cheaper to detect than, say, sex chromosomes. OK—and how often are another person’s sex chromosomes “decision-relevant with respect to the agent’s goals”?
You seem to be making some assumptions about which parts of the parable are getting mapped to which parts of the real-world issue that obviously inspired the parable. I don’t think this is the correct venue for me to discuss the real-world issue. On this website, under this byline, I’d rather only talk about bleggs and rubes—even if you were correct to point out that it would be disingenuous for someone to expect readers to pretend not to notice the real-world reason that we’re talking about bleggs and rubes. With this in mind, I’ll respond below to a modified version of part of your comment (with edits bracketed).
I guess it’s cheaper to detect than, say, [palladium or vanadium content]. OK—and how often [is a sortable object’s metal content] “decision-relevant with respect to the agent’s goals”? Pretty much only if [you work in the sorting factory.] [That’s] fairly uncommon—for most of us, very few of the [sortable objects] we interact with [need to be sorted into bins according to metal content].
Sure! But reality is very high-dimensional—bleggs and rubes have other properties besides color, shape, and metal content—for example, the properties of being flexible-vs.-hard or luminescent-vs.-non-luminescent, as well as many others that didn’t make it into the parable. If you care about making accurate predictions about the many properties of sortable objects that you can’t immediately observe, then how you draw your category boundaries matters, because your brain is going to be using the category membership you assigned in order to derive your prior expectations about the variables that you haven’t yet observed.
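As a concrete sketch of that inference pattern, here is a toy model in Python; the feature rates below are invented for illustration and are not drawn from the parable:

```python
# Toy sketch of "category membership as a prior over unobserved features".
# The categories are from the parable; the probabilities are made up.
CATEGORY_FEATURE_RATES = {
    # P(feature | category), as learned from previously sorted objects
    "blegg": {"flexible": 0.95, "luminescent": 0.90},
    "rube":  {"flexible": 0.10, "luminescent": 0.05},
}

def prior_for(category, feature):
    """Prior expectation of an as-yet-unobserved feature, given the assigned category."""
    return CATEGORY_FEATURE_RATES[category][feature]

# Having categorized an object as a blegg from color and shape alone,
# we already have a betting price on properties we haven't checked yet:
assert prior_for("blegg", "luminescent") == 0.90
assert prior_for("rube", "luminescent") == 0.05
```

The point of the sketch is just that the category assignment does real probabilistic work: change where the boundary falls, and the priors on every unobserved feature change with it.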
sex chromosomes, which is exactly the “expensive” feature the author identifies in the case of trans people.
Trying to think of some examples, it seems to me that what matters is simply the presence of features that are “decision-relevant with respect to the agent’s goals”. [...]
Thanks for this substantive, on-topic criticism! I would want to think some more before deciding how to reply to this.
ADDENDUM: I thought some more and wrote a sister comment.
Yes, I agree that the content-note deals with my “disingenuousness” objection.
I agree (of course!) that there is structure in the world and that categories are not completely arbitrary. It seems to me that this is perfectly compatible with saying that they are _somewhat_ arbitrary, which conveniently is what I did actually say. Some categorizations are better than others, but there are often multiple roughly-equally-good categorizations and picking one of those rather than another is not an epistemological error. There is something in reality that is perfectly precise and leaves no room for human whims, but that thing is not usually (perhaps not ever) a specific categorization.
So, anyway, in the particular case of transness, I agree that it’s possible that some of the four categorizations we’ve considered here (yours, which makes trans people a separate category but nudge-nudge-wink-wink indicates that for most purposes trans people are much more “like” others of their ‘originally assigned’ gender than others of their ‘adopted’ gender; and the three others I mentioned: getting by with just two categories and not putting trans people in either of them; getting by with just two categories and putting trans people in their ‘originally assigned’ category; getting by with just two categories and putting trans people in their ‘adopted’ category) are so much better than the others that we should reject those others. But it seems to me that the relative merits of these depend on the agent’s goals, and the best categorization to adopt may be quite different depending on whether you’re (e.g.) a medical researcher, a person suffering gender dysphoria, a random member of the general public, etc—and also on your own values and priorities.
I did indeed make some assumptions about what was meant to map to what. It’s possible that I didn’t get them quite right. I decline to agree with your proposal that if something metaphorical that you wrote doesn’t seem to match up well I should simply pretend that you didn’t intend it as a metaphor, though of course it’s entirely possible that some different match-up makes it work much better.
But it seems to me that the relative merits of these depend on the agent’s goals, and the best categorization to adopt may be quite different depending on whether you’re [...] and also on your own values and priorities.
Yes, I agree! (And furthermore, the same person might use different categorizations at different times depending on what particular aspects of reality are most relevant to the task at hand.)
But given an agent’s goals in a particular situation, I think it would be a shocking coincidence for it to be the case that “there are [...] multiple roughly-equally-good categorizations.” Why would that happen often?
If I want to use sortable objects as modern art sculptures to decorate my living room, then the relevant features are shape and color, and I want to think about rubes and bleggs (and count adapted bleggs as bleggs). If I also care about how the room looks in the dark and adapted bleggs don’t glow in the dark like ordinary bleggs do, then I want to think about adapted bleggs as being different from ordinary bleggs.
If I’m running a factory that harvests sortable objects for their metal content and my sorting scanner is expensive to run, then I want to think about rubes and ordinary bleggs (because I can infer metal content with acceptably high probability by observing the shape and color of these objects), but I want to look out for adapted bleggs (because their metal content is, with high probability, not what I would expect based on the color/shape/metal-content generalizations I learned from my observations of rubes and ordinary bleggs). If the factory invests in a new state-of-the-art sorting scanner that can be cheaply run on every object, then I don’t have any reason to care about shape or color anymore—I just care about palladium-cored objects and vanadium-cored objects.
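To make the factory’s tradeoff concrete, here is a toy expected-cost calculation; every number in it is invented for illustration:

```python
# Toy expected-cost comparison for the sorting factory; all numbers are invented.
MISSORT_COST = 100.0        # cost of binning an object by the wrong metal content
P_WRONG_FROM_SHAPE = 0.02   # error rate when inferring metal from shape and color

def cost_per_object(scan_cost, scan_everything):
    """Expected cost per object under each sorting policy."""
    if scan_everything:
        return scan_cost                       # pay the scanner, never missort
    return P_WRONG_FROM_SHAPE * MISSORT_COST   # free inference, occasional missort

# With the old, expensive scanner, the shape/color categories pay their way:
assert cost_per_object(10.0, scan_everything=True) > cost_per_object(10.0, scan_everything=False)

# With a cheap state-of-the-art scanner, they stop mattering:
assert cost_per_object(1.0, scan_everything=True) < cost_per_object(1.0, scan_everything=False)
```

With these made-up numbers, a scanner cheaper than the expected missort loss makes shape and color irrelevant, which is exactly the regime shift described above.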
and picking one of those rather than another is not an epistemological error.
If you’re really somehow in a situation where there are multiple roughly-equally-good categorizations with respect to your goals and the information you have, then I agree that picking one of those rather than another isn’t an epistemological error. Google Maps and MapQuest are not exactly the same map, but if you just want to drive somewhere, they both reflect the territory pretty well: it probably doesn’t matter which one you use. Faced with an arbitrary choice, you should make an arbitrary choice: flip a coin, or call random.random().
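Made literal, with the two map names standing in for any pair of roughly-equally-good options, the arbitrary choice looks like this:

```python
import random

# If two maps really are roughly equally good for the task at hand,
# nothing epistemic rides on which one you pick.
equally_good = ["Google Maps", "MapQuest"]
pick = equally_good[0] if random.random() < 0.5 else equally_good[1]
assert pick in equally_good
```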
And yet somehow, I never run into people who say, “Categories are somewhat arbitrary, therefore you might as well roll a d3 to decide whether to say ‘trans women are women’ or ‘so-called “trans women” are men’ or ‘transwomen are transwomen’, because each of these maps is doing a roughly-equally-good job of reflecting the relevant aspects of the territory.” But I run into lots of people who say, “Categories are somewhat arbitrary, therefore I’m not wrong to insist that trans women are women,” and who somehow never seem to find it useful to bring up the idea that categories are somewhat arbitrary in seemingly any other context.
You see the problem? If the one has some sort of specific argument for why I should use a particular categorization system in a particular situation, then that’s great, and I want to hear it! But it has to be an argument and not a selectively-invoked appeal-to-arbitrariness conversation-halter.
Multiple roughly-equally-good categorizations might not often happen to an idealized superintelligent AI that’s much better than we are at extracting all possible information from its environment. But we humans are slow and stupid and make mistakes, and accordingly our probability distributions are really wide, which means our error bars are large and we often find ourselves with multiple hypotheses we can’t decide between with confidence.
(Consider, for a rather different example, political questions of the form “how much of X should the government do?” where X is providing a social “safety net”, regulating businesses, or whatever. Obviously these are somewhat value-laden questions, but even if I hold that constant by e.g. just trying to decide what I think is optimal policy I find myself quite uncertain.)
Perhaps more to the point, most of us are in different situations at different times. If what matters to you about rubleggs is sometimes palladium content, sometimes vanadium content, and sometimes furriness, then I think you have to choose between (1) maintaining a bunch of different categorizations and switching between them, (2) maintaining a single categorization that’s much finer grained than is usually needed in any single situation and aggregating categories in different ways at different times, and (3) finding an approach that doesn’t rely so much on putting things into categories. The cognitive-efficiency benefits of categorization are much diminished in this situation.
Your penultimate paragraph argues (I think) that talk of categories’ somewhat-arbitrariness (like, say, Scott’s in TCWMFM) is not sincere and is adopted merely as an excuse for taking a particular view of trans people (perhaps because that’s socially convenient, or feels nice, or something). Well, I guess that’s just the mirror image of what I said about your comments on categories, so turnabout is fair play, but I don’t think I can agree with it.
The “Disguised Queries” post that first introduced bleggs and rubes makes essentially the point that categories are somewhat arbitrary, that there’s no One True Right Answer to “is it a blegg or a rube?”, and that which answer is best depends on what particular things you care about on a particular occasion.
Scott’s “Diseased thinking” (last time I heard, the most highly upvoted article in the history of Less Wrong) makes essentially the same point in connection to the category of “disease”. (The leading example being obesity rather than, say, gender dysphoria.)
Scott’s “The tails coming apart as a metaphor for life” does much the same for categories like “good thing” and “bad thing”.
Here’s a little thing from the Institute for Fiscal Studies about poverty metrics, which begins by observing that there are many possible ways to define poverty and nothing resembling consensus about which is best. (The categories here are “poor” and “not poor”.)
More generally, “well, it all depends what you mean by X” has been a standard move among philosophers for many decades, and it’s basically the same thing: words correspond to categories, categories are somewhat arbitrary, and questions about whether a P is or isn’t a Q are often best understood as questions about how to draw the boundaries of Q, which in turn may be best understood as questions about values or priorities or what have you rather than about the actual content of the actual world.
So it seems to me very not-true that the idea that categories are somewhat arbitrary is a thing invoked only in order to avoid having to take a definite position (or, in order to avoid choosing one’s definite position on the basis of hard facts rather than touchy-feely sensitivity) on how to think and talk about trans people.
The “Disguised Queries” post that first introduced bleggs and rubes makes essentially the point that categories are somewhat arbitrary, that there’s no One True Right Answer to “is it a blegg or a rube?”, and that which answer is best depends on what particular things you care about on a particular occasion.
That’s not how I would summarize that post at all! I mean, I agree that the post did literally say that (“The question ‘Is this object a blegg?’ may stand in for different queries on different occasions”). But it also went on to say more things that I think substantially change the moral—
If [the question] weren’t standing in for some query, you’d have no reason to care.
[...] People who argue that atheism is a religion “because it states beliefs about God” are really trying to argue (I think) that the reasoning methods used in atheism are on a par with the reasoning methods used in religion, or that atheism is no safer than religion in terms of the probability of causally engendering violence, etc… [...]
[...] The a priori irrational part is where, in the course of the argument, someone pulls out a dictionary and looks up the definition of “atheism” or “religion”. [...] How could a dictionary possibly decide whether an empirical cluster of atheists is really substantially different from an empirical cluster of theologians? How can reality vary with the meaning of a word? The points in thingspace don’t move around when we redraw a boundary. [bolding mine—ZMD]
But people often don’t realize that their argument about where to draw a definitional boundary, is really a dispute over whether to infer a characteristic shared by most things inside an empirical cluster...
I claim that what Yudkowsky said about the irrationality of appealing to the dictionary goes just as well for appeals to personal values or priorities. It’s not false exactly, but it doesn’t accomplish anything.
Suppose Bob says, “Abortion is murder, because it’s the killing of a human being!”
Alice says, “No, abortion isn’t murder, because murder is the killing of a sentient being, and fetuses aren’t sentient.”
As Alice and Bob’s hired rationalist mediator, you could say, “You two just have different preferences about somewhat-arbitrary category boundaries, that’s all! Abortion is murder-with-respect-to-Bob’s-definition, but it isn’t murder-with-respect-to-Alice’s-definition. Done! End of conversation!”
If different political factions are engaged in conflict over how to define the extension of some common word—common words being a scarce and valuable resource both culturally and information-theoretically—rationalists may not be able to say that one side is simply right and the other is simply wrong, but we can at least strive for objectivity in describing the conflict. Before shrugging and saying, “Well, this is a difference in values; nothing more to be said about it,” we can talk about the detailed consequences of what is gained or lost by paying attention to some differences and ignoring others.
I wasn’t claiming to summarize “Disguised Queries”. I was pointing out one thing that it says, which happens to be the thing that you say no one says other than to push a particular position on trans issues, and which “Disguised Queries” says with (so far as I can tell) no attempt to say anything about transness at all.
Alice and Bob’s conversation doesn’t have to end once they (hopefully) recognize that their disagreement is about category boundaries as much as it is about matters of fact. They may well want to figure out why they draw their boundaries in different places. It might be because they have different purposes; or because they have different opinions on some other matter of fact; or because one or both are really making appeals to emotion for an already-decided conclusion rather than actually trying to think clearly about what sort of a thing a foetus is; etc.
Ending a conversation, or a train of thought, prematurely, is a bad thing. It seems altogether unfair to complain at me merely for using words that could be abused for that purpose. (If you see me actually trying to end a conversation with them, of course, then by all means complain away.)
Over and over again in this discussion, it seems as if I’m being taken to say things I’m fairly sure I haven’t said and certainly don’t believe. If it’s because I’m communicating badly, then I’m very sorry. But it might be worth considering other explanations.
I wasn’t claiming to summarize “Disguised Queries”.
I may have misinterpreted what you meant by the phrase “makes essentially the point that.”
the thing that you say no one says other than to push a particular position on trans issues
I see. I think I made a mistake in the great-great-grandparent comment. That comment’s penultimate paragraph ended: “[...] and who somehow never seem to find it useful to bring up the idea that categories are somewhat arbitrary in seemingly any other context.” I should not have written that, because as you pointed out in the great-grandparent, it’s not true. This turned out to be a pretty costly mistake on my part, because we’ve now just spent the better part of four comments litigating the consequences of this error in a way that we could have avoided if only I had taken more care to phrase the point I was trying to make less hyperbolically.
The point I was trying to make in the offending paragraph is that if someone honestly believes that the choice between multiple category systems is arbitrary or somewhat-arbitrary, then they should accept the choice being made arbitrarily or somewhat-arbitrarily. I agree that “It depends on what you mean by X” is often a useful motion, but I think it’s possible to distinguish when it’s being used to facilitate communication from when it’s being used to impose frame control. Specifically: it’s incoherent to say, “It’s arbitrary, so you should do it my way,” because if it were really arbitrary, the one would not be motivated to say “you should do it my way.” In discussions about my idiosyncratic special interest, I very frequently encounter incredibly mendacious frame-control attempts from people who call themselves “rationalists” and who don’t seem to do this on most other topics. (This is, of course, with respect to how I draw the “incredibly mendacious” category boundary.)
Speaking of ending conversations, I’m feeling pretty emotionally exhausted, and we seem to be spending a lot of wordcount on mutual misunderstandings, so unless you have more things you want to explain to me, maybe this should be the end of the thread? Thanks for the invigorating discussion! This was way more productive than most of the conversations I’ve had lately! (Which maybe tells you something about the quality of those other discussions.)
Happy to leave it here; I have a few final comments that are mostly just making explicit things that I think we largely agree on. (But if any of them annoy you, feel free to have the last word.)
1. Yeah, sorry, “essentially” may have been a bad choice of word. I meant “makes (inter alia) a point which is essentially that …” rather than “makes, as its most essential part, the point that …”.
2. My apologies for taking you more literally than intended. I agree that “it’s arbitrary so you should do it my way” is nuts. On the other hand, “there’s an element of choice here, and I’m choosing X because of Y” seems (at least potentially) OK to me. I don’t know what specific incredibly mendacious things you have in mind, but e.g. nothing in Scott’s TCWMFM strikes me as mendacious and I remain unconvinced by your criticisms of it. (Not, I am fairly sure, because I simply don’t understand them.)
Finally, my apologies for any part of the emotional exhaustion that’s the result of things I said that could have been better if I’d been cleverer or more sensitive or something of the kind.
Meta: That comment had a bunch of bullet points in it when I wrote it. Now (at least for me, at least at the moment) they seem to have disappeared. Weird. [EDIT to clarify:] I mean that the bullet symbols themselves, and the indentation that usually goes with them, have gone. The actual words are still there.
My comment above is unchanged, which I guess means it was a parsing rather than a rendering problem if the bug is now fixed.
Do bullet lists work now?
If they do, this and the previous line should be bulleted.
… Nope, still broken, sorry. But it looks as if the vertical spacing is different from what it would be if these were all ordinary paragraphs, so something is being done. In the HTML they are showing up as <li> elements, without any surrounding <ul> or anything of the sort; I don’t know whether that’s what’s intended.
Right. I’m using Firefox and see no bullets. We’re in “Chrome is the new IE6” territory, I fear; no one bothers testing things on Firefox any more. Alas!
I have a PR that fixes it properly. Should be up by Monday.
I usually check browser compatibility; I just didn’t consider it in this case, since I didn’t actually expect that something as old as bullet lists would still have browser rendering differences.
Categories are never arbitrary. They are created to serve purposes. They can serve those purposes better or worse. There can be multiple purposes, leading to multiple categories overlapping and intersecting. Purposes can be lost (imagine a link to the Sequences posting on lost purposes). “Arbitrary” is a “buffer” or “lullaby” word (imagine another link, I might put them in when I’m not writing on a phone on a train) that obscures all that.
It seems to me that you’re saying a bunch of things I already said, and saying them as if they are corrections to errors I’ve made. For instance:
RK: “Categories are never arbitrary.” gjm: “categories are not completely arbitrary.”
RK: “They are created to serve purposes.” gjm: “the relative merits of these depend on the agent’s goals”
RK: “They can serve those purposes better or worse.” gjm: “Some categorizations are better than others [...] the relative merits of these depend on the agent’s goals.”
So, anyway, I agree with what you say, but I’m not sure why you think (if you do—it seems like you do) I was using “arbitrary” as what you call a “lullaby word”. I’m sorry if for you it obscured any of those points about categories, though clearly it hasn’t stopped you noticing them; you may or may not choose to believe me when I said it didn’t stop me noticing them either.
For what it’s worth, I think what I mean when I say “categories are somewhat arbitrary” is almost exactly the same as what you mean when you say “they are created to serve purposes”.
No! Categories are not “somewhat arbitrary”! There is structure in the world, and intelligent agents need categories that carve the structure at the joints so that they can make efficient probabilistic inferences about the variables they’re trying to optimize! “Even if you cannot do the math, knowing that the math exists tells you that the dance step is precise and has no room in it for your whims.” We had a whole Sequence about this! Doesn’t anyone else remember?!
The author did no such thing! It’s epistemology fiction about bleggs and rubes! It’s true that I came up with the parable while I was trying to think carefully about transgender stuff that was of direct and intense personal relevance to me. It’s true that it would be disingenuous for someone to expect readers to not-notice that I was trying to think about trans issues. (I mean, it’s in the URL.) But I didn’t say anything about chromosomes! “If confusion threatens when you interpret a metaphor as a metaphor, try taking everything completely literally.”
Thanks for this substantive, on-topic criticism! I would want to think some more before deciding how to reply to this.
ADDENDUM: I thought some more and wrote a sister comment.
Yes, I agree that the content-note deals with my “disingenuousness” objection.
I agree (of course!) that there is structure in the world and that categories are not completely arbitrary. It seems to me that this is perfectly compatible with saying that they are _somewhat_ arbitrary, which conveniently is what I did actually say. Some categorizations are better than others, but there are often multiple roughly-equally-good categorizations and picking one of those rather than another is not an epistemological error. There is something in reality that is perfectly precise and leaves no room for human whims, but that thing is not usually (perhaps not ever) a specific categorization.
So, anyway, in the particular case of transness, I agree that it’s possible that some of the four categorizations we’ve considered here (yours, which makes trans people a separate category but nudge-nudge-wink-wink indicates that for most purposes trans people are much more “like” others of their ‘originally assigned’ gender than others of their ‘adopted’ gender; and the three others I mentioned: getting by with just two categories and not putting trans people in either of them; getting by with just two categories and putting trans people in their ‘originally assigned’ category; getting by with just two categories and putting trans people in their ‘adopted’ category) are so much better than others that we should reject them. But it seems to me that that the relative merits of these depend on the agent’s goals, and the best categorization to adopt may be quite different depending on whether you’re (e.g.) a medical researcher, a person suffering gender dysphoria, a random member of the general public, etc—and also on your own values and priorities.
I did indeed make some assumptions about what was meant to map to what. It’s possible that I didn’t get them quite right. I decline to agree with your proposal that if something metaphorical that you wrote doesn’t seem to match up well I should simply pretend that you intended it as a metaphor, though of course it’s entirely possible that some different match-up makes it work much better.
Yes, I agree! (And furthermore, the same person might use different categorizations at different times depending on what particular aspects of reality are most relevant to the task at hand.)
But given an agent’s goals in a particular situation, I think it would be a shocking coincidence for it to be the case that “there are [...] multiple roughly-equally-good categorizations.” Why would that happen often?
If I want to use sortable objects as modern art sculptures to decorate my living room, then the relevant features are shape and color, and I want to think about rubes and bleggs (and count adapted bleggs as bleggs). If I also care about how the room looks in the dark and adapted bleggs don’t glow in the dark like ordinary bleggs do, then I want to think about adapted bleggs as being different from ordinary bleggs.
If I’m running a factory that harvests sortable objects for their metal content and my sorting scanner is expensive to run, then I want to think about rubes and ordinary bleggs (because I can infer metal content with acceptably high probability by observing the shape and color of these objects), but I want to look out for adapted bleggs (because their metal content is, with high probability, not what I would expect based on the color/shape/metal-content generalizations I learned from my observations of rubes and ordinary bleggs). If the factory invests in a new state-of-the-art sorting scanner that can be cheaply run on every object, then I don’t have any reason to care about shape or color anymore—I just care about palladium-cored objects and vanadium-cored objects.
If you’re really somehow in a situation where there are multiple roughly-equally-good categorizations with respect to your goals and the information you have, then I agree that picking one of those rather than another isn’t an epistemological error. Google Maps and MapQuest are not exactly the same map, but if you just want to drive somewhere, they both reflect the territory pretty well: it probably doesn’t matter which one you use. Faced with an arbitrary choice, you should make an arbitrary choice: flip a coin, or call random.random().
And yet somehow, I never run into people who say, “Categories are somewhat arbitrary, therefore you might as well roll a d3 to decide whether to say ‘trans women are women’ or ‘so-called “trans women” are men’ or ‘transwomen are transwomen’, because each of these maps is doing a roughly-equally-good job of reflecting the relevant aspects of the territory.” But I run into lots of people who say, “Categories are somewhat arbitrary, therefore I’m not wrong to insist that trans women are women,” and who somehow never seem to find it useful to bring up the idea that categories are somewhat arbitrary in seemingly any other context.
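To make the “flip a coin” move concrete: here’s a minimal sketch (the scheme names are hypothetical placeholders, not anything from the discussion) of what honestly treating a choice as arbitrary would look like in code. If the options really are equally good with respect to your goals, a random tie-break costs you nothing.

```python
import random

# Hypothetical placeholder names for categorization schemes that are,
# by assumption, equally good with respect to the agent's goals.
EQUALLY_GOOD_SCHEMES = ["scheme_a", "scheme_b", "scheme_c"]

def pick_arbitrarily(schemes, rng=random):
    """Tie-break among options the agent has no reason to prefer.

    If you genuinely believe the choice is arbitrary, you should be
    indifferent to which element this returns -- that's the "roll a d3"
    move. Refusing the random pick is evidence you weren't indifferent.
    """
    return rng.choice(schemes)
```

The point of the sketch is the contrapositive: someone who insists on one particular output of `pick_arbitrarily` is thereby revealing a preference, which means the choice wasn’t arbitrary for them after all.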
You see the problem? If the one has some sort of specific argument for why I should use a particular categorization system in a particular situation, then that’s great, and I want to hear it! But it has to be an argument and not a selectively-invoked appeal-to-arbitrariness conversation-halter.
Multiple roughly-equally-good categorizations might not often happen to an idealized superintelligent AI that’s much better than we are at extracting all possible information from its environment. But we humans are slow and stupid and make mistakes, and accordingly our probability distributions are really wide, which means our error bars are large and we often find ourselves with multiple hypotheses we can’t decide between with confidence.
(Consider, for a rather different example, political questions of the form “how much of X should the government do?” where X is providing a social “safety net”, regulating businesses, or whatever. Obviously these are somewhat value-laden questions, but even if I hold the values constant — by, e.g., just trying to decide what I myself think is optimal policy — I find myself quite uncertain.)
Perhaps more to the point, most of us are in different situations at different times. If what matters to you about rubleggs is sometimes palladium content, sometimes vanadium content, and sometimes furriness, then I think you have to choose between (1) maintaining a bunch of different categorizations and switching between them, (2) maintaining a single categorization that’s much finer grained than is usually needed in any single situation and aggregating categories in different ways at different times, and (3) finding an approach that doesn’t rely so much on putting things into categories. The cognitive-efficiency benefits of categorization are much diminished in this situation.
Your penultimate paragraph argues (I think) that talk of categories’ somewhat-arbitrariness (like, say, Scott’s in TCWMFM) is not sincere and is adopted merely as an excuse for taking a particular view of trans people (perhaps because that’s socially convenient, or feels nice, or something). Well, I guess that’s just the mirror image of what I said about your comments on categories, so turnabout is fair play, but I don’t think I can agree with it.
The “Disguised Queries” post that first introduced bleggs and rubes makes essentially the point that categories are somewhat arbitrary, that there’s no One True Right Answer to “is it a blegg or a rube?”, and that which answer is best depends on what particular things you care about on a particular occasion.
Scott’s “Diseased thinking” (last time I heard, the most highly upvoted article in the history of Less Wrong) makes essentially the same point in connection to the category of “disease”. (The leading example being obesity rather than, say, gender dysphoria.)
Scott’s “The tails coming apart as a metaphor for life” does much the same for categories like “good thing” and “bad thing”.
Here’s a little thing from the Institute for Fiscal Studies about poverty metrics, which begins by observing that there are many possible ways to define poverty and nothing resembling consensus about which is best. (The categories here are “poor” and “not poor”.)
More generally, “well, it all depends what you mean by X” has been a standard move among philosophers for many decades, and it’s basically the same thing: words correspond to categories, categories are somewhat arbitrary, and questions about whether a P is or isn’t a Q are often best understood as questions about how to draw the boundaries of Q, which in turn may be best understood as questions about values or priorities or what have you rather than about the actual content of the actual world.
So it seems to me very not-true that the idea that categories are somewhat arbitrary is a thing invoked only in order to avoid having to take a definite position (or, in order to avoid choosing one’s definite position on the basis of hard facts rather than touchy-feely sensitivity) on how to think and talk about trans people.
That’s not how I would summarize that post at all! I mean, I agree that the post did literally say that (“The question ‘Is this object a blegg?’ may stand in for different queries on different occasions”). But it also went on to say more things that I think substantially change the moral—
I claim that what Yudkowsky said about the irrationality of appealing to the dictionary goes the same for appeals to personal values or priorities. It’s not false exactly, but it doesn’t accomplish anything.
Suppose Bob says, “Abortion is murder, because it’s the killing of a human being!”
Alice says, “No, abortion isn’t murder, because murder is the killing of a sentient being, and fetuses aren’t sentient.”
As Alice and Bob’s hired rationalist mediator, you could say, “You two just have different preferences about somewhat-arbitrary category boundaries, that’s all! Abortion is murder-with-respect-to-Bob’s-definition, but it isn’t murder-with-respect-to-Alice’s-definition. Done! End of conversation!”
And maybe sometimes there really is nothing more to it than that. But oftentimes, I think we can do more work to break the symmetry: to work out what different predictions Alice and Bob are making about reality, or what different preferences they have about reality, and refocus the discussion on that. As I wrote in “The Categories Were Made for Man to Make Predictions”:
We had an entire Sequence specifically about this! You were there! I was there! Why doesn’t anyone remember?!
I wasn’t claiming to summarize “Disguised Queries”. I was pointing out one thing that it says, which happens to be the thing that you say no one says other than to push a particular position on trans issues, and which “Disguised Queries” says with (so far as I can tell) no attempt to say anything about transness at all.
Alice and Bob’s conversation doesn’t have to end once they (hopefully) recognize that their disagreement is about category boundaries as much as it is about matters of fact. They may well want to figure out why they draw their boundaries in different places. It might be because they have different purposes; or because they have different opinions on some other matter of fact; or because one or both are really making appeals to emotion for an already-decided conclusion rather than actually trying to think clearly about what sort of a thing a foetus is; etc.
Ending a conversation, or a train of thought, prematurely, is a bad thing. It seems altogether unfair to complain at me merely for using words that could be abused for that purpose. (If you see me actually trying to end a conversation with them, of course, then by all means complain away.)
Over and over again in this discussion, it seems as if I’m being taken to say things I’m fairly sure I haven’t said and certainly don’t believe. If it’s because I’m communicating badly, then I’m very sorry. But it might be worth considering other explanations.
I may have misinterpreted what you meant by the phrase “makes essentially the point that.”
I see. I think I made a mistake in the great-great-grandparent comment. That comment’s penultimate paragraph ended: “[...] and who somehow never seem to find it useful to bring up the idea that categories are somewhat arbitrary in seemingly any other context.” I should not have written that, because as you pointed out in the great-grandparent, it’s not true. This turned out to be a pretty costly mistake on my part, because we’ve now spent the better part of four comments litigating the consequences of this error in a way that we could have avoided if only I had taken more care to phrase the point I was trying to make less hyperbolically.
The point I was trying to make in the offending paragraph is that if someone honestly believes that the choice between multiple category systems is arbitrary or somewhat-arbitrary, then they should accept the choice being made arbitrarily or somewhat-arbitrarily. I agree that “It depends on what you mean by X” is often a useful motion, but I think it’s possible to distinguish when it’s being used to facilitate communication from when it’s being used to impose frame control. Specifically: it’s incoherent to say, “It’s arbitrary, so you should do it my way,” because if it were really arbitrary, the one would not be motivated to say “you should do it my way.” In discussions about my idiosyncratic special interest, I very frequently encounter incredibly mendacious frame-control attempts from people who call themselves “rationalists” and who don’t seem to do this on most other topics. (This is, of course, with respect to how I draw the “incredibly mendacious” category boundary.)
Speaking of ending conversations, I’m feeling pretty emotionally exhausted, and we seem to be spending a lot of wordcount on mutual misunderstandings, so unless you have more things you want to explain to me, maybe this should be the end of the thread? Thanks for the invigorating discussion! This was way more productive than most of the conversations I’ve had lately! (Which maybe tells you something about the quality of those other discussions.)
Happy to leave it here; I have a few final comments that are mostly just making explicit things that I think we largely agree on. (But if any of them annoy you, feel free to have the last word.)
1. Yeah, sorry, “essentially” may have been a bad choice of word. I meant “makes (inter alia) a point which is essentially that …” rather than “makes, as its most essential part, the point that …”.
2. My apologies for taking you more literally than intended. I agree that “it’s arbitrary so you should do it my way” is nuts. On the other hand, “there’s an element of choice here, and I’m choosing X because of Y” seems (at least potentially) OK to me. I don’t know what specific incredibly mendacious things you have in mind, but e.g. nothing in Scott’s TCWMFM strikes me as mendacious and I remain unconvinced by your criticisms of it. (Not, I am fairly sure, because I simply don’t understand them.)
Finally, my apologies for any part of the emotional exhaustion that’s the result of things I said that could have been better if I’d been cleverer or more sensitive or something of the kind.
Meta: That comment had a bunch of bullet points in it when I wrote it. Now (at least for me, at least at the moment) they seem to have disappeared. Weird. [EDIT to clarify:] I mean that the bullet symbols themselves, and the indentation that usually goes with them, have gone. The actual words are still there.
Our bad. We broke bullet-lists with a recent update that also added autolinking. I am working on a fix that should ideally go up tonight.
Should be fixed now. Sorry for the inconvenience.
My comment above is unchanged, which I guess means it was a parsing rather than a rendering problem if the bug is now fixed.
Do bullet lists work now?
If they do, this and the previous line should be bulleted.
… Nope, still broken, sorry. But it looks as if the vertical spacing is different from what it would be if these were all ordinary paragraphs, so something is being done. In the HTML they are showing up as <li> elements, without any surrounding <ul> or anything of the sort; I don’t know whether that’s what’s intended.
Wait, that list is definitely bulleted, and I also fixed your comment above. Are we seeing different things?
I don’t see bullets on Firefox 65.0.1, but I do on Chromium 72.0.3626.121 (both Xubuntu 16.04.5).
Right. I’m using Firefox and see no bullets. We’re in “Chrome is the new IE6” territory, I fear; no one bothers testing things on Firefox any more. Alas!
I have a PR that fixes it properly. Should be up by Monday.
I usually check browser compatibility, I just didn’t consider it in this case since I didn’t actually expect that something as old as bullet lists would still have browser rendering differences.
Thanks!
My guess is it’s some browser inconsistency because of orphaned <li> elements. Will try to fix that as well.
Categories are never arbitrary. They are created to serve purposes. They can serve those purposes better or worse. There can be multiple purposes, leading to multiple categories overlapping and intersecting. Purposes can be lost (imagine a link to the Sequences posting on lost purposes). “Arbitrary” is a “buffer” or “lullaby” word (imagine another link, I might put them in when I’m not writing on a phone on a train) that obscures all that.
It seems to me that you’re saying a bunch of things I already said, and saying them as if they are corrections to errors I’ve made. For instance:
RK: “Categories are never arbitrary.” gjm: “categories are not completely arbitrary.”
RK: “They are created to serve purposes.” gjm: “the relative merits of these depend on the agent’s goals”
RK: “They can serve those purposes better or worse.” gjm: “Some categorizations are better than others [...] the relative merits of these depend on the agent’s goals.”
So, anyway, I agree with what you say, but I’m not sure why you think (if you do—it seems like you do) I was using “arbitrary” as what you call a “lullaby word”. I’m sorry if for you it obscured any of those points about categories, though clearly it hasn’t stopped you noticing them; you may or may not choose to believe me when I said it didn’t stop me noticing them either.
For what it’s worth, I think what I mean when I say “categories are somewhat arbitrary” is almost exactly the same as what you mean when you say “they are created to serve purposes”.