I’m Harry Altman. I do strange sorts of math.
Posts I’d recommend:
A summary of Savage’s foundations for probability and utility—if the arguments used to ground probability and utility seem circular to you, here’s a non-circular way of doing it.
One annoying thing in reading Chapter 3: it states that for l=2,4,8, the optimal scoring rules can be written in terms of elementary functions. However, you only actually give the full formula for the case l=8 (for l=2 you give it on half the interval). What are the formulas for the other cases?
(But also, this is really cool, thanks for posting this!)
I think some cases of what you’re describing as derivation-time penalties may really be can-you-derive-that-at-all penalties. E.g., with MWI and no Born rule assumed, it doesn’t seem that there is any way to derive it. I would still expect a “correct” interpretation of QM to be essentially MWI-like, but I think it’s correct to penalize MWI-w/o-Born-assumption, not for the complexity of deriving the Born rule, but for the fact that deriving it doesn’t seem to be possible at all. Similarly with attempts to eliminate time, or its distinction from space, from physics; it seems like it simply shouldn’t be possible in such a case to get something like Lorentz invariance.
Why do babies need so much sleep then?
Given that at the moment we don’t really understand why people need to sleep at all, I don’t think this is a strong argument for any particular claimed function.
Oh, that’s a good citation, thanks. I’ve used that rough argument in the past, knowing I’d copied it from someone, but I had no recollection of what specifically or that it had been made more formal. Now I know!
My comment above was largely just intended as “how come nobody listens when I say it?” grumbling. :P
I should note that this is more or less the same thing that Alex Mennen and I have been pointing out for quite some time, even if the exact framework is a little different. You can’t both have unbounded utilities, and insist that expected utility works for infinite gambles.
IMO the correct thing to abandon is unbounded utilities, but whatever assumption you choose to abandon, the basic argument is an old one due to Fisher, and I’ve discussed it in previous posts! (Even if the framework is a little different here, this seems essentially similar.)
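For concreteness, here’s a minimal sketch of the usual St. Petersburg-style construction that this kind of incompatibility argument rests on (my illustration; I’m not claiming this is the exact form of Fisher’s argument): if utility is unbounded, there are outcomes $A_1, A_2, \ldots$ with $U(A_n) \ge 2^n$, and the gamble $G$ that yields $A_n$ with probability $2^{-n}$ satisfies

$$E[U(G)] = \sum_{n=1}^{\infty} 2^{-n} U(A_n) \ge \sum_{n=1}^{\infty} 1 = \infty.$$

Any gamble that improves on $G$ in every branch should be strictly preferred, yet it has the same infinite expected utility, so expected utility can no longer represent preferences over such infinite gambles; something has to give.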
I’m glad to see other people are finally taking the issue seriously, at least...
Yeah, that sounds about right to me. I’m not saying that you should assume such people are harmless or anything! Just that, like, you might want to try giving them a kick first—“hey, constant vigilance, remember?” :P—and see how they respond before giving up and treating them as hostile.
This seems exactly backwards: if someone makes uncorrelated errors, they are probably unintentional mistakes. If someone makes correlated errors, they are better explained as part of a strategy.
I mean, there is a word for correlated errors, and that word is “bias”; so you seem to be essentially claiming that people are unbiased? I’m guessing that’s probably not what you’re trying to claim, but that is what I am concluding? Regardless, I’m saying people are biased towards this mistake.
Or really, what I’m saying is that it’s the same sort of phenomenon that Eliezer discusses here. So it could indeed be construed as a strategy as you say; but it would not be a strategy on the part of the conscious agent, but rather a strategy on the part of the “corrupted hardware” itself. Or something like that—sorry, that’s not a great way of putting it, but I don’t really have a better one, and I hope that conveys what I’m getting at.
Like, I think you’re assuming too much awareness/agency of people. A person who makes correlated errors, and is aware of what they are doing, is executing a deliberate strategy. But lots of people who make correlated errors are just biased, or the errors are part of a built-in strategy they’re executing not deliberately but by default, without thinking about it, a strategy that requires effort not to execute.
We should expect someone calling themself a rationalist to be better, obviously, but, IDK, sometimes things go bad?
I can imagine, after reading the sequences, continuing to have this bias in my own thoughts, but I don’t see how I could have been so confused as to refer to it in conversation as a valid principle of epistemology.
I mean people don’t necessarily fully internalize everything they read, and in some people the “hold on what am I doing?” can be weak? <shrug>
I mean I certainly don’t want to rule out deliberate malice like you’re talking about, but neither do I think this one snippet is enough to strongly conclude it.
I don’t think this follows. I do not see how degree of wrongness implies intent. Eliezer’s comment rhetorically suggests intent (“trolling”) as a way of highlighting how wrong the person is; he is free to correct me if I am wrong, but I am pretty sure that is not an actual suggestion of intent, only a rhetorical one.
I would say, moreover, that this is the sort of mistake that occurs, over and over, by default, with no intent necessary. I might even say that it is avoiding, not committing, this sort of mistake that requires intent. Because this sort of mistake is just sort of what people fall into by default, and avoiding it requires active effort.
Is it contrary to everything Eliezer’s ever written? Sure! But reading the entirety of the Sequences and calling yourself a “rationalist” does not in any way obviate the need to do the actual work of better group epistemology, of noticing such mistakes (and the path to them) and correcting or avoiding them.
I think we can only infer intent like you’re talking about if the person in question is, actually, y’know, thinking about what they’re doing. But I think people are really, like, acting on autopilot a pretty big fraction of the time; not autopiloting takes effort, and while doing that work may be what a “rationalist” is supposed to do, it’s still not the default. All I think we can infer from this is a failure to do the work to shift out of autopilot and think. Bad group epistemology via laziness rather than via intent strikes me as the more likely explanation.
I want to more or less second what River said. Mostly I wouldn’t have bothered replying to this… but your line of “today around <30” struck me as particularly wrong.
So, first of all, as River already noted, your claim about “in loco parentis” isn’t accurate. People 18 or over are legally adults; yes, there used to be a notion of “in loco parentis” applied to college students, but that hasn’t been current law since about the 60s.
But also, under 30? Like, you’re talking about grad students? That is not my experience at all. Undergrads are still treated as kids to a substantial extent, yes, even if they’re legally adults and there’s no longer any such thing as “in loco parentis”. But in my experience grad students are, absolutely, treated as adults, nor have I heard of things being otherwise. Perhaps this varies by field (I’m in math) or location or something, I don’t know, but I at least have never heard of that before.
I’m not involved with the Bay Area crowd but I remember seeing things about how Leverage is a scam/cult years ago; I was surprised to learn it’s still around...? I expected most everyone would have deserted it after that...
I do worry about “ends justify the means” reasoning when evaluating whether a person or project was or wasn’t “good for the world” or “worth supporting”. This seems especially likely when using an effective-altruism-flavored lens on which only a few people/organizations/interventions matter orders of magnitude more than others. If one believes that a project is one of very few projects that could possibly matter, and the future of humanity is at stake—and also believes the project is doing something new/experimental that current civilization is inadequate for—there is a risk of using that belief to extend unwarranted tolerance of structurally-unsound organizational decisions, including those typical of “high-demand groups” (such as use of psychological techniques to increase member loyalty, living and working in the same place, non-platonic relationships with subordinates, secrecy, and so on) without proportionate concern for the risks of structuring an organization in that way.
There is (roughly) a sequences post for that. :P
Seems to me the situation in the original yak-shaving story falls into case 2 -- the thing to do is to forget about borrowing the EZPass and just pay the toll!
There used to be an Ann Arbor LW meetup group, actually, back when I lived there—it seems to be pretty dead now as best I can tell, but the mailing list still exists. It’s A4R-A2@googlegroups.com; I don’t know how relevant this is to you, since you’re trying to start a UM group and many of the people on that list will likely not be UM-affiliated, but you can at least try recruiting from there (or just restarting it, if you’re not necessarily trying to specifically start a UM group). It also used to have a website, though I can’t find it at the moment, and I doubt it would be that helpful anyway.
According to the meetup group list on this website, there also is (or was) a UM EA group, but there’s not really any information about it? And there’s this SSC meetup group listed there too, which possibly has more recent activity? No idea who’s in that, I don’t know this Sam Rossini, but possibly also worth recruiting from?
So, uh, yeah, that’s my attempt (as someone who hasn’t lived in Ann Arbor for two years) to survey the prior work in this area. :P Someone who’s actually still there could likely say more...
Oh, huh—looks like this paper is the summary of the blog series that “Slime Mold Time Mold” has been writing? Guess I can read this paper to skip to the end, since not all of the series is posted yet. :P
Yeah. You can use language that is unambiguously not attack language; it just takes more effort to avoid common words. In this respect it’s not so different from how discussing lots of other things seriously requires avoiding common but confused words!
I’m reminded of this paper, which discusses a smaller set of two-player games. What you call “Cake Eating” they call the “Harmony Game”. They also use the more suggestive variable names—which I believe come from existing literature—R (reward), S (sucker’s payoff), T (temptation), P (punishment) instead of (W, X, Y, Z). Note that in addition to R > P (W > Z) they also added the restrictions T > P (Y > Z) and R > S (W > X) so that the two options could be meaningfully labeled “cooperate” and “defect” instead of “Krump” and “Flitz” (the cooperate option is always better for the other player, regardless of whether it’s better or worse for you). (I’m ignoring cases of things being equal, just like you are.)
(Of course, the paper isn’t actually about classifying games, it’s an empirical study of how people actually play these games! But I remember it for being the first place I saw such a classification...)
With these additional restrictions, there are only four games: Harmony Game (Cake Eating), Chicken (Hawk-Dove/Snowdrift/Farmer’s Dilemma), Stag Hunt, and Prisoner’s Dilemma (Too Many Cooks).
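Just to make the classification concrete, here’s a minimal sketch in Python (my own illustration; the numeric payoffs are made up, and I’m using the paper’s R/S/T/P names rather than your W/X/Y/Z):

```python
# Classify a symmetric 2x2 game by the ordering of its payoffs:
# R (reward, both cooperate), S (sucker's payoff, you cooperate while they defect),
# T (temptation, you defect while they cooperate), P (punishment, both defect).
# The paper's restrictions R > P, T > P, R > S are assumed; ties are ignored,
# as in the post.

def classify(R, S, T, P):
    assert R > P and T > P and R > S, "outside the restricted class"
    if R > T and S > P:
        return "Harmony Game (Cake Eating)"        # cooperating strictly dominates
    if T > R and S > P:
        return "Chicken (Hawk-Dove / Snowdrift)"   # best reply is the opposite of the other player's move
    if R > T and P > S:
        return "Stag Hunt"                         # two pure equilibria: both cooperate or both defect
    if T > R and P > S:
        return "Prisoner's Dilemma (Too Many Cooks)"  # defecting strictly dominates
    raise ValueError("ties between R/T or S/P not handled, as in the post")

# Made-up illustrative payoffs for each of the four cases:
print(classify(R=3, S=1, T=2, P=0))  # Harmony Game
print(classify(R=3, S=1, T=4, P=0))  # Chicken
print(classify(R=3, S=0, T=2, P=1))  # Stag Hunt
print(classify(R=3, S=0, T=4, P=1))  # Prisoner's Dilemma
```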
I’d basically been using that as my way of thinking about two-player games, but this broader set might be useful. Thanks for taking the time to do this and assign names to these.
I do have to wonder about that result that Zack_M_Davis mentions… as you mentioned, where’s the Harmony Game in it? Also, isn’t Battle of the Sexes more like Chicken than like Stag Hunt? I would expect to see Chicken and Stag Hunt, not Battle of the Sexes and Chicken, which sounds like the same thing twice and seems to leave out Stag Hunt. But maybe Battle of the Sexes is actually equivalent, in the sense described, to Stag Hunt rather than Chicken? That would be surprising, but I didn’t sit down to check whether the definition is satisfied or not...
I suppose so. It is at least a different problem than I was worried about...
Huh. Given the negative reputation of bioethics around here—one I hadn’t much questioned, TBH—most of these are surprisingly reasonable. Only #10, #16, and #24 really seemed like the LW stereotype of the bioethics paper that I would roll my eyes at. Arguably also #31, but I’d argue that one is instead alarming in a different way.
Some others seemed like bureaucratic junk (so, neither good nor bad), and for others the quoted sections didn’t really give enough information to judge; it is quite possible that a few more of these would go on the stereotype list if I read the papers further.
#1 is… man, why does it have to be so hostile? The argument it’s making is basically a counter-stereotypical bioethics argument, but it’s written in such a hostile manner. That’s not the way to have a good discussion!
Also, I’m quite amused to see that #3 basically argues that we need what I’ve previously referred to here as a “theory of legitimate influence”, for what appear likely to be similar reasons (although again I didn’t read the full thing to inspect this in more detail).
Consider a modified version of the prisoner’s dilemma. This time, the prisoners are allowed to communicate, but they also have to solve an additional technical problem, say, how to split the loot. They may start by agreeing not to betray each other to the prosecutors, but later one of them may say: “I’ve done most of the work. I want 70% of the loot, otherwise I am going to rat on you.” It’s easy to see how the problem would escalate and end up in the prisoners betraying each other.
Minor note, but I think you could just talk about a [bargaining game](https://en.wikipedia.org/wiki/Cooperative_bargaining), rather than the Prisoner’s Dilemma, which appears to be unrelated. There are other basic game theory examples beyond the Prisoner’s Dilemma!
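For instance (my own illustration, not anything from the post): in the standard Nash bargaining treatment, the loot-splitting subproblem already has a clean answer; the split maximizes the product of each player’s gain over what they’d get if negotiations broke down. A rough numerical sketch, with made-up disagreement payoffs:

```python
# Rough sketch of the Nash bargaining solution for splitting loot of size 1.
# d1, d2 are the (made-up) payoffs each prisoner gets if they fail to agree
# (e.g. both get ratted out). The solution maximizes (u1 - d1) * (u2 - d2).

def nash_bargaining_split(total, d1, d2, steps=10_000):
    best_share, best_product = None, float("-inf")
    for i in range(steps + 1):
        share1 = total * i / steps      # prisoner 1's share of the loot
        share2 = total - share1         # prisoner 2's share
        if share1 < d1 or share2 < d2:
            continue                    # worse than walking away for someone
        product = (share1 - d1) * (share2 - d2)
        if product > best_product:
            best_share, best_product = share1, product
    return best_share, total - best_share

print(nash_bargaining_split(1.0, d1=0.0, d2=0.0))  # symmetric outside options -> roughly (0.5, 0.5)
print(nash_bargaining_split(1.0, d1=0.3, d2=0.0))  # a better outside option shifts the split
```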
Ha! OK, that is indeed nasty. Yeah, I guess CASes can solve this kind of problem these days, can’t they? Well—I say “these days” as if this hasn’t been the case for, like, my entire life; I’ve just never gotten used to making routine use of them...