I’m Harry Altman. I do strange sorts of math.
Posts I’d recommend:
A summary of Savage’s foundations for probability and utility—if the arguments used to ground probability and utility seem circular to you, here’s a non-circular way of doing it.
I don’t think this follows. I do not see how degree of wrongness implies intent. Eliezer’s comment rhetorically suggests intent (“trolling”) as a way of highlighting how wrong the person is; he is free to correct me if I am wrong, but I am pretty sure that is not an actual suggestion of intent, only a rhetorical one.
I would say moreover, that this is the sort of mistake that occurs, over and over, by default, with no intent necessary. I might even say that it is avoiding, not committing, this sort of mistake, that requires intent. Because this sort of mistake is just sort of what people fall into by default, and avoiding it requires active effort.
Is it contrary to everything Eliezer’s ever written? Sure! But reading the entirety of the Sequences, calling yourself a “rationalist”, does not in any way obviate the need to do the actual work of better group epistemology, of noticing such mistakes (and the path to them) and correcting/avoiding them.
I think we can only infer intent like you’re talking about if the person in question is, actually, y’know, thinking about what they’re doing. But I think people are really acting on autopilot a pretty big fraction of the time; not autopiloting takes effort, and while doing that work may be what a “rationalist” is supposed to do, it’s still not the default. All I think we can infer from this is a failure to do the work to shift out of autopilot and think. Bad group epistemology via laziness rather than via intent strikes me as the more likely explanation.
I want to more or less second what River said. Mostly I wouldn’t have bothered replying to this… but your line of “today around <30” struck me as particularly wrong.
So, first of all, as River already noted, your claim about “in loco parentis” isn’t accurate. People 18 or over are legally adults; yes, there used to be a notion of “in loco parentis” applied to college students, but that hasn’t been current law since about the 60s.
But also, under 30? Like, you’re talking about grad students? That is not my experience at all. Undergrads are still treated as kids to a substantial extent, yes, even if they’re legally adults and there’s no longer any such thing as “in loco parentis”. But in my experience grad students are, absolutely, treated as adults, nor have I heard of things being otherwise. Perhaps this varies by field (I’m in math) or location or something, I don’t know, but I at least have never heard of that before.
I’m not involved with the Bay Area crowd but I remember seeing things about how Leverage is a scam/cult years ago; I was surprised to learn it’s still around...? I expected most everyone would have deserted it after that...
I do worry about “ends justify the means” reasoning when evaluating whether a person or project was or wasn’t “good for the world” or “worth supporting”. This seems especially likely when using an effective-altruism-flavored lens which holds that only a few people/organizations/interventions will matter orders of magnitude more than others. If one believes that a project is one of very few projects that could possibly matter, and the future of humanity is at stake—and also believes the project is doing something new/experimental that current civilization is inadequate for—there is a risk of using that belief to extend unwarranted tolerance to structurally-unsound organizational decisions, including those typical of “high-demand groups” (such as use of psychological techniques to increase member loyalty, living and working in the same place, non-platonic relationships with subordinates, secrecy, and so on), without proportionate concern for the risks of structuring an organization in that way.
There is (roughly) a sequences post for that. :P
Seems to me the story in the original yak-shaving story falls into case 2 -- the thing to do is to forget about borrowing the EZPass and just pay the toll!
There used to be an Ann Arbor LW meetup group, actually, back when I lived there—it seems to be pretty dead now best I can tell but the mailing list still exists. It’s A4R-A2@googlegroups.com; I don’t know how relevant this is to you, since you’re trying to start a UM group and many of the people on that list will likely not be UM-affiliated, but you can at least try recruiting from there (or just restarting it if you’re not necessarily trying to specifically start a UM group). It also used to have a website, though I can’t find it at the moment, and I doubt it would be that helpful anyway.
According to the meetup group list on this website, there also is (or was) a UM EA group, but there’s not really any information about it? And there’s this SSC meetup group listed there too, which possibly has more recent activity? No idea who’s in that, I don’t know this Sam Rossini, but possibly also worth recruiting from?
So, uh, yeah, that’s my attempt (as someone who hasn’t lived in Ann Arbor for two years) to survey the prior work in this area. :P Someone who’s actually still there could likely say more...
Oh, huh—looks like this paper is a summary of the blog series that “Slime Mold Time Mold” has been writing? Guess I can read this paper to skip to the end, since not all of the series is posted yet. :P
Yeah. You can use language that is unambiguously not attack language, it just takes more effort to avoid common words. In this respect it’s not unlike how discussing lots of other things seriously requires avoiding common but confused words!
I’m reminded of this paper, which discusses a smaller set of two-player games. What you call “Cake Eating” they call the “Harmony Game”. They also use the more suggestive variable names—which I believe come from existing literature—R (reward), S (sucker’s payoff), T (temptation), P (punishment) instead of (W, X, Y, Z). Note that in addition to R > P (W > Z) they also added the restrictions T > P (Y > Z) and R > S (W > X) so that the two options could be meaningfully labeled “cooperate” and “defect” instead of “Krump” and “Flitz” (the cooperate option is always better for the other player, regardless of whether it’s better or worse for you). (I’m ignoring cases of things being equal, just like you are.)
(Of course, the paper isn’t actually about classifying games, it’s an empirical study of how people actually play these games! But I remember it for being the first place I saw such a classification...)
With these additional restrictions, there are only four games: Harmony Game (Cake Eating), Chicken (Hawk-Dove/Snowdrift/Farmer’s Dilemma), Stag Hunt, and Prisoner’s Dilemma (Too Many Cooks).
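The four-way classification above can be sketched in code. This is just my own quick illustration (the function name and example payoff values are mine, not from the paper), using the conventional names R, S, T, P:

```python
# A sketch of the four-game classification described above, using the
# conventional payoff names R (reward), S (sucker's payoff), T (temptation),
# P (punishment). Ties are ignored, as in the post.

def classify(R, S, T, P):
    """Classify a symmetric 2x2 game, given R > P, T > P, and R > S
    (the restrictions that make 'cooperate'/'defect' meaningful labels)."""
    assert R > P and T > P and R > S
    if R > T and S > P:
        return "Harmony Game"       # cooperating strictly dominates
    if T > R and S > P:
        return "Chicken"            # best response: do the opposite of the other player
    if R > T and P > S:
        return "Stag Hunt"          # best response: match the other player
    return "Prisoner's Dilemma"     # defecting strictly dominates

print(classify(R=4, S=3, T=2, P=1))  # Harmony Game
print(classify(R=3, S=1, T=4, P=0))  # Chicken
print(classify(R=4, S=0, T=3, P=1))  # Stag Hunt
print(classify(R=3, S=0, T=5, P=1))  # Prisoner's Dilemma
```

The two comparisons R vs. T and S vs. P are exactly what distinguish the four games once the other restrictions are in place.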
I’d basically been using that as my way of thinking about two-player games, but this broader set might be useful. Thanks for taking the time to do this and assign names to these.
I do have to wonder about that result that Zack_M_Davis mentions… as you mentioned, where’s the Harmony Game in it? Also, isn’t Battle of the Sexes more like Chicken than like Stag Hunt? I would expect to see Chicken and Stag Hunt, not Battle of the Sexes and Chicken, which sounds like the same thing twice and seems to leave out Stag Hunt. But maybe Battle of the Sexes is actually equivalent, in the sense described, to Stag Hunt rather than Chicken? That would be surprising, but I didn’t sit down to check whether the definition is satisfied or not...
I suppose so. It is at least a different problem than I was worried about...
Huh. Given the negative reputation of bioethics around here—one I hadn’t much questioned, TBH—most of these are surprisingly reasonable. Only #10, #16, and #24 really seemed like the LW stereotype of the bioethics paper that I would roll my eyes at. Arguably also #31, but I’d argue that one is instead alarming in a different way.
Some others seemed like bureaucratic junk (so, neither good nor bad), and others I think the quoted sections didn’t really give enough information to judge; it is quite possible that a few more of these would go under the stereotype list if I read these papers further.
#1 is… man, why does it have to be so hostile? The argument it’s making is basically a counter-stereotypical bioethics argument, but it’s written in such a hostile manner. That’s not the way to have a good discussion!
Also, I’m quite amused to see that #3 basically argues that we need what I’ve previously referred to here as a “theory of legitimate influence”, for what appear likely to be similar reasons (although again I didn’t read the full thing to inspect this in more detail).
Consider a modified version of the prisoner’s dilemma. This time, the prisoners are allowed to communicate, but they also have to solve an additional technical problem, say, how to split the loot. They may start with agreeing on not betraying each other to the prosecutors, but later one of them may say: “I’ve done most of the work. I want 70% of the loot, otherwise I am going to rat on you.” It’s easy to see how the problem would escalate and end up in the prisoners betraying each other.
Minor note, but I think you could just talk about a [bargaining game](https://en.wikipedia.org/wiki/Cooperative_bargaining), rather than the Prisoner’s Dilemma, which appears to be unrelated. There are other basic game theory examples beyond the Prisoner’s Dilemma!
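For instance, the loot-splitting scenario fits the Nash bargaining solution, one standard solution concept for cooperative bargaining. Here’s a rough numeric sketch (all numbers invented for illustration; the disagreement payoffs stand in for what each prisoner gets if they rat each other out):

```python
# A minimal sketch of a bargaining game: two prisoners split loot of 100.
# If they fail to agree (and betray each other), each gets a disagreement
# payoff d_i. The Nash bargaining solution maximizes the product of gains
# over the disagreement point. Brute force over integer splits for clarity.

def nash_split(total, d1, d2):
    """Return the integer split (u1, u2) maximizing (u1 - d1) * (u2 - d2)."""
    best = None
    for u1 in range(total + 1):
        u2 = total - u1
        if u1 < d1 or u2 < d2:
            continue  # no one accepts less than their disagreement payoff
        gain = (u1 - d1) * (u2 - d2)
        if best is None or gain > best[0]:
            best = (gain, u1, u2)
    return best[1], best[2]

print(nash_split(100, d1=0, d2=0))    # (50, 50): symmetric threat points, even split
print(nash_split(100, d1=40, d2=0))   # (70, 30): a better outside option shifts the split
```

The point being: “how do we divide the surplus, given threats of walking away” is exactly what bargaining theory models, whereas the Prisoner’s Dilemma has no negotiation over a division at all.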
I just explained why (without more specific theories of in exactly what way the gravity would become delocalized from the visible mass) the bullet cluster is not evidence one way or the other.
Now, you compare the extra fields of modified gravity to epicycles—as in, post-hoc complications grafted on to a theory to explain a particular phenomenon. But these extra fields are, to the best of my understanding, not grafted on to explain such delocalization; they’re the actual basic content of the modified gravity theories and necessary to obtain a workable theory at all. MOND by itself, after all, is not a theory of gravity; the problem then is making one compatible with it, and every actual attempt at that that I’m aware of involves these extra fields, again, not as an epicycle for the bullet cluster, but as a way of constructing a workable theory at all. So, I don’t think that comparison is apt here.
One could perhaps say that such theories are epicycles upon MOND—since the timeline may go MOND, then bullet cluster, then proper modified gravity theories—but for the reasons above I don’t think that makes a lot of sense either.
If this was some post-hoc epicycle then your comment would make some sense; but as it is, I don’t think it does. Is there some reason that I’m missing that it should be regarded as a post-hoc epicycle?
Note that Hossenfelder herself says modified gravity is probably not correct! It’s still important to understand what is or is not a valid argument against it. The other arguments for dark matter sure seem pretty compelling!
(Also, uh, “people who think X are just closed-minded and clearly not open to persuasion” is generally not the sort of charity we try to go for here on LW...? I didn’t downvote you but, like, accusing people of being closed-minded rather than actually arguing is on the path to becoming similarly closed-minded oneself, you know?)
I feel like this really misses the point of the whole “non-central fallacy” idea. I would say, categories are heuristics and those heuristics have limits. When the category gets strained, the thing to do is to stop arguing using the category and start arguing the particular facts without relation to the category (“taboo your words”).
You’re saying that this sort of arguing-via-category is useful because it’s actually aguing-via-similarity; but I see the point of Scott/Yvain’s original article being that such arguing via similarity simply isn’t useful in such cases, and has to be replaced with a direct assessment of the facts.
Like, one might say, similar in what way, and how do we know that this particular similarity is relevant in this case? But any answer to why the similarity is relevant, could be translated into an argument that doesn’t rely on the similarity in the first place. Similarity can thus be a useful guide to finding arguments, but it shouldn’t, in contentious cases, be considered compelling as an argument itself.
Yes, as you say, the argument is common because it is useful as a quick shorthand most of the time. But in contentious cases, in edge cases—the cases that people are likely to be arguing about—it breaks down. That is to say, it’s an argument whose validity is largely limited to those cases where people aren’t arguing to begin with!
Good post. Makes a good case. I wasn’t aware of the evidence from galactic cluster lensing; that’s pretty impressive. (I guess not as much as the CMB power spectrum, but that I’d heard about before. :P )
But, my understanding is that the Bullet Cluster is actually not the strong evidence it’s claimed to be? My understanding of modified gravity theories is that, since they all work by adding extra fields, it’s also possible for those to have gravity separated from visible matter, even if no dark matter is present. (See e.g. here… of course in this post Hossenfelder claims that the Bullet Cluster in particular is actually evidence against dark matter due to simulation reasons, but I don’t know how much to believe that.)
Of course this means that modified gravity theories also aren’t quite as different from dark matter as they’re commonly said to be—with either dark matter or modified gravity you’re adding an additional field, the difference is just (OK, this is maybe a big just!) the nature of that field. But since this new field would presumably not act like matter in all the other ways you describe, my understanding is that it is still definitely distinct from “dark matter” for the purposes of this post.
Apparently these days even modified gravity proponents admit you still need dark matter to make things work out, which rather kills the whole motivation behind modified gravity, so I’m not sure if that’s really an idea that makes sense anymore! Still, had to point out the thing about the Bullet Cluster, because based on what I know I don’t think that part is actually correct.
“Cyan” isn’t a basic color term in English; English speakers ordinarily consider cyan to be a variant of blue, not something basically separate. Something that is cyan could also be described in English as “blue”. As opposed to say, red and pink—these are both basic color terms in English; an English speaker would not ordinarily refer to something pink as “red”, or vice versa.
Or in other words: Color words don’t refer to points in color space, they refer to regions, which means that you can look at how those regions overlap—some may be subsets of others, some may be disjoint (well—not disjoint per se, but thought of as disjoint, since obviously you can find things near the boundary that won’t be judged consistently), etc. Having words “blue” and “cyan” that refer to two thought-of-as-disjoint regions is pretty different from having words “blue” and “cyan” where the latter refers to a subset of the former.
So, it’s not as simple as saying “English also has a word cyan”—yes, it does, but the meaning of that word, and the relation of its meaning to that of “blue”, is pretty different. These translated words don’t quite correspond; we’re taking regions in color space, and translating them to words that refer to similar regions, regions that contain a number of the same points, but not the same ones.
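A toy illustration of the regions-vs.-points idea (the hue ranges here are completely made up by me for illustration; real color categorization lives in a 3D color space, not a hue line):

```python
# Model basic color terms as regions (sets of hue angles, 0-359) so we can
# check subset vs. disjoint relations between them. Hue values are invented
# for illustration only.

english = {
    "blue": set(range(180, 250)),      # includes the cyan-ish hues
    "red":  {350, 355, 0, 5, 10},
    "pink": {330, 335, 340, 345},
}
# English "cyan" is a *subset* of English "blue":
english["cyan"] = set(range(180, 210))

# In a language where the cyan-like hues are a basic term of their own,
# the two regions are thought of as disjoint instead:
other = {
    "blue-term": set(range(210, 250)),
    "cyan-term": set(range(180, 210)),
}

print(english["cyan"] <= english["blue"])                 # True: subset
print(english["pink"].isdisjoint(english["red"]))         # True: disjoint, like basic terms
print(other["cyan-term"].isdisjoint(other["blue-term"]))  # True: disjoint
```

So both vocabularies have a word covering roughly the cyan region, but the containment relations between the words differ, which is the real point.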
The bit in the comic about “Eurocentric paint” obviously doesn’t quite make sense as stated—the division of the rainbow doesn’t come from paint!—but a paint set that focused on the central examples of basic color terms of a particular language could reasonably be called a that-language-centric paint set. In any case the basic point is just that dividing up color space into basic color terms has a large cultural component to it.
Wow!
I guess a thing that still bugs me after reading the rest of the comments is: if it turns out that this vaccine only offers protection against inhaling the virus through the nose, how much does that help when one considers that one could also inhale it through the mouth? Like, I worry that after taking this I’d still need to avoid indoor spaces with other people, etc., which would defeat a lot of the benefit of it.
But, if it turns out that it does yield antibodies in the blood, then… this sounds very much worth trying!
So, why do we perceive so many situations to be Prisoner’s Dilemma -like rather than Stag Hunt -like?
I don’t think that we do, exactly. I think that most people only know the term “prisoners’ dilemma” and haven’t learned any more game theory than that; and then occasionally they go and actually attempt to map things onto the Prisoners’ Dilemma as a result. :-/
That sounds like it might have been it?
I mean, there is a word for correlated errors, and that word is “bias”; so you seem to be essentially claiming that people are unbiased? I’m guessing that’s probably not what you’re trying to claim, but that is what I am concluding? Regardless, I’m saying people are biased towards this mistake.
Or really, what I’m saying is that it’s the same sort of phenomenon that Eliezer discusses here. So it could indeed be construed as a strategy as you say; but it would not be a strategy on the part of the conscious agent, but rather a strategy on the part of the “corrupted hardware” itself. Or something like that—sorry, that’s not a great way of putting it, but I don’t really have a better one, and I hope that conveys what I’m getting at.
Like, I think you’re assuming too much awareness/agency of people. A person who makes correlated errors, and is aware of what they are doing, is executing a deliberate strategy. But lots of people who make correlated errors are just biased, or the errors are part of a built-in strategy they’re executing, not deliberately, but by default without thinking about it, that requires effort not to execute.
We should expect someone calling themself a rationalist to be better, obviously, but, IDK, sometimes things go bad?
I mean people don’t necessarily fully internalize everything they read, and in some people the “hold on what am I doing?” can be weak? <shrug>
I mean I certainly don’t want to rule out deliberate malice like you’re talking about, but neither do I think this one snippet is enough to strongly conclude it.