I mean this as agreement with the “accuracy isn’t a top priority” theory, plus an amused comment about how the aside embodies that theory by acknowledging the existence of a more accurate theory which does not get prioritized.
Ninety-Three
Ah, I was going off the given description of linearity, which makes it pretty trivial to say "you can sum two days of payouts and call that the new value". Looking up the proper specification, I see it's actually about combining two separate games into one game while keeping the payouts the same. This distribution indeed lacks that property.
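A quick numeric check, with made-up two-player numbers (the post's actual figures aren't restated here): under the proportional rule, where each player keeps their solo profits plus a solo-profit-weighted share of the synergy bonus, summing two days' payouts does not match the payout from the combined game.

```python
def proportional(solo_a, solo_b, bonus):
    """Two-player proportional rule: each player keeps their solo profits
    plus a share of the synergy bonus proportional to their solo profits."""
    share_a = bonus * solo_a / (solo_a + solo_b)
    return (solo_a + share_a, solo_b + bonus - share_a)

# Two hypothetical days of the lemonade stand:
day1 = proportional(30, 10, 40)   # A gets 60, B gets 20
day2 = proportional(10, 30, 0)    # A gets 10, B gets 30

# The combined game sums solos and bonuses across the two days:
combined = proportional(40, 40, 40)  # A gets 60, B gets 60

print(day1[0] + day2[0], combined[0])  # 70 vs 60: linearity fails
```

A's two-day total is $70, but the combined game pays A only $60, so the rule is not linear in the games-combining sense.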
You can make it work without an explicit veto. Bob convinces Alice that Carol will be a valuable contributor to the team. In fact, Carol does nothing, but Bob follows a strategy of “Do nothing unless Carol is present”. This achieves the same synergies:
A+B: $0 (Venture needs action from both A and B; B chooses to take no action)
A+C: $0 (Venture needs action from both A and B)
B+C: $0 (Venture needs action from both A and B)
A+B+C: $300
In this way Bob has managed to redirect some of Alice’s payouts by introducing a player who does nothing except remove a bottleneck he added into his own playstyle in order to exploit Alice.
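For concreteness, the payouts in this rigged game can be computed by brute force over player orderings (the standard permutation formula for Shapley values, not something from the original post):

```python
from itertools import permutations
from fractions import Fraction

def shapley(players, v):
    """Shapley value: average each player's marginal contribution
    over all orderings in which the coalition could assemble."""
    values = {p: Fraction(0) for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            values[p] += Fraction(v(with_p) - v(coalition))
            coalition = with_p
    return {p: val / len(orders) for p, val in values.items()}

# Bob's "do nothing unless Carol is present" game: only the
# full coalition produces the $300.
def v(coalition):
    return 300 if coalition == {"A", "B", "C"} else 0

print(shapley(["A", "B", "C"], v))   # each gets $100
```

Without Carol, the two-player game (v({A,B}) = 300, singletons 0) splits $150/$150, so Bob's strategy moves $50 of Alice's payout to the Bob-Carol pair.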
Shapley values are the ONLY way to guarantee:
Efficiency — The sum of Shapley values adds up to the total payoff for the full group (in our case, $280).
Symmetry — If two players interact identically with the rest of the group, their Shapley values are equal.
Linearity — If the group runs a lemonade stand on two different days (with different team dynamics on each day), a player’s Shapley value is the sum of their payouts from each day.
Null player — If a player contributes nothing on their own and never affects group dynamics, their Shapley value is 0.
I don’t think this is true. Consider an alternative distribution in which each player receives their full “solo profits”, and receives a share of each synergy bonus equal to their solo profits divided by the sum of the solo profits of all players involved in that synergy bonus. In the above example, you receive 100% of your solo profits, 30/(30+10)=3/4 of the You-Liam synergy, 30/(30+20)=3/5 of the You-Emma synergy, and 30/(30+20+10)=1/2 of the everyone synergy, for a total payout of $159. This is justified on the intuition that your higher solo profits suggest you are doing “more work” and deserve a larger share.
This distribution does have the unusual property that if a player’s solo profits are 0, they can never receive any payouts even if they do produce synergy bonuses. This seems like a serious flaw, since it gives “synergy-only” players no incentive to participate, but unless I’ve missed something it does meet all the above criteria.
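The rule and its flaw can be sketched in a few lines. The synergy bonuses below are hypothetical (the post's actual numbers aren't restated here), and "Synergist" is a hypothetical zero-solo player added to exhibit the problem:

```python
def proportional_split(solo, synergies):
    """Each player keeps their solo profits plus a share of every synergy
    bonus they participate in, proportional to their solo profits among
    that bonus's participants."""
    payout = dict(solo)
    for members, bonus in synergies.items():
        denom = sum(solo[p] for p in members)
        for p in members:
            payout[p] += bonus * solo[p] / denom
    return payout

# Hypothetical numbers:
solo = {"You": 30, "Emma": 20, "Liam": 10, "Synergist": 0}
synergies = {
    ("You", "Liam"): 40,
    ("You", "Emma"): 70,
    ("You", "Synergist"): 100,  # a bonus that only exists thanks to Synergist
}
payouts = proportional_split(solo, synergies)

# Efficiency holds: payouts sum to total profits.
assert sum(payouts.values()) == sum(solo.values()) + sum(synergies.values())
# But the zero-solo player gets nothing despite creating a $100 bonus.
assert payouts["Synergist"] == 0
```

The division of each bonus depends only on the participants' solo profits, so a player whose solo profits are 0 is assigned a 0 share of every bonus they enable.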
Having thought about the above more, I think “accuracy isn’t a top priority” is a better theory than the one expressed here, but if I don’t publish this now it will probably be months before I do.
I like how this admission supports the “accuracy isn’t a top priority” theory.
His defense on the handshake is to acknowledge that he lied about the 3 millisecond timeout while maintaining that the story is still true anyway. This is the opposite of convincing! What do you expect a liar to say, “Dang, you got me”? Elsewhere, to fix another plot hole, he needs to hypothesize that Sun was shipping a version of Sendmail V5 which had been modified for backwards compatibility with V8 config files.
There is some number of suspicious details at which it becomes appropriate to assume the story is made up, and if you don’t think this story meets that bar then I have a bridge to sell you.
This claims that connect calls were aborted after 3 milliseconds yet could successfully connect to servers within 3 light-milliseconds. That doesn’t make sense: a server 500 miles away would send a handshake signal back to you, and you would receive it 6 milliseconds after the call had been made, i.e. 3 milliseconds after it had been aborted.
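The arithmetic, for anyone who wants to check it. This assumes signals travel at exactly light speed in vacuum, the most charitable possible case (real networks are slower):

```python
C_KM_PER_MS = 299_792.458 / 1000   # speed of light in km per millisecond
distance_km = 500 * 1.609344       # 500 miles converted to km

one_way_ms = distance_km / C_KM_PER_MS
round_trip_ms = 2 * one_way_ms

# A connect needs a full round trip (your SYN out, the server's
# SYN-ACK back) before it can succeed, so the reply from a server
# ~2.7 light-ms away arrives ~5.4 ms after the call was made,
# well past a 3 ms abort.
print(f"one-way: {one_way_ms:.2f} ms, round trip: {round_trip_ms:.2f} ms")
```

So even granting light-speed signaling, the handshake can’t complete inside the 3 millisecond window the story requires.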
This story appears to be made up.
If investigating things were free, sure. But the reason we don’t investigate things is that doing so takes time, and the expected value of finding something novel is often lower than the expected cost of an investigation. To make it concrete: the story as presented describes an insane way to run a company, one that would result in spending an enormous number of engineer-hours on wild goose chases. If I as the CEO found out a middle manager was sending engineers out on four-day assignments to everyone who writes us a crazy-sounding letter, I would tell him to stop wasting company resources immediately.
I have no strong opinion on whether society investigates too many or too few of these claims, but I keep observing that many people’s models seem to lack the “maybe he’s lying” theory, which would give them an inflated estimate of the expected value for investigating things.
Link. But you know you can just go onto Libgen and type in the name yourself, right? You don’t need to ask for a link.
This story isn’t true. It is an urban legend, and intrinsically hard to confirm, but we can be quite confident this version of the story is false because almost every detail has been changed from the original telling (as documented in Curses! Broiled Again!, a collection of urban legends available on Libgen), in which it was a woman calling the car dealership, which sent a mechanic, and the vapor lock formed because vanilla ice cream was slower to buy, since it had to be hand-packaged.
When someone says something incredibly implausible is happening, the more reasonable explanation is not that it somehow makes sense, it’s that they’re making shit up.
It’s also more commonly used as a cat tranquilizer, so even within the “animal-medications” frame, horse is a bit noncentral. I suspect this is deliberate because “horse tranquilizer” just sounds hardcore in a way “cat tranquilizer” doesn’t.
This proposal increases the influence of the states, in the sense of “how much does it matter that any given person bothered to vote?”, but does it increase their preference satisfaction? If the four states each conceive of themselves as red or blue states, then each of them will be thinking “under the current system I estimate an X% chance that we’ll elect my party’s president, while under the new system I estimate a Y% chance”. If both sides are perfect predictors, then one will conclude that Y<X and should not do the deal. If both sides are imperfect predictors such that they both think Y>X, then the outside view still tells each of them that it’s equally likely they’re the sucker here and shouldn’t participate.
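The zero-sum step can be made explicit. With two parties and hypothetical values for X and Y, whatever the deal does for one side's win probability it does in reverse to the other's:

```python
# Hypothetical win probabilities for the red party under each system:
x_red = 0.55    # P(red win) under the current system
y_red = 0.60    # P(red win) under the proposed deal

# Two-party race: blue's probability is the complement.
x_blue, y_blue = 1 - x_red, 1 - y_red

# If the deal helps red (Y > X for red), it necessarily hurts blue.
print((y_red > x_red) == (y_blue < x_blue))  # always True
```

So under perfect prediction at most one side can expect to gain, and that side's counterparty should refuse.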
Smaller communities have a lot more control over their gatekeeping because, like, they control it themselves, whereas the larger field’s gatekeeping is determined via open-ended incentives in the broader world that thousands (maybe millions?) of people have influence over.
Does the field of social psychology not control the gatekeeping of social psychology? I suppose you could argue it’s controlled by whatever legislative body passes the funding bills, but most of social psychology’s incentives seem to be set by social psychologists. So both small and large communities control their own gatekeeping, and it’s not obvious to me why smaller ones would do better.
At some level of smallness your gatekeeping can be literally one guy who decides whether an entrant is good enough to pass the gate, and I acknowledge that that seems like it could produce better than median selection pressure. But by the time you get big enough that you’re talking about communities collectively controlling the gatekeeping… aren’t we just describing the same system at a population of one thousand vs one hundred thousand?
I could imagine an argument that yes actually, differences of scale matter because larger communities have intrinsically worse dynamics for some reason, but if that’s the angle I would expect to at least hear what the reason is rather than have it be left as self-evident.
A small research community of unusually smart/competent/well-informed people can relatively-easily outperform a whole field, by having better internal memetic selection pressures.
It’s not obvious to me that this is true, except insofar as a small research community can be so unusually smart/competent/etc that their median researcher is better than a whole field’s median researcher so they get better selection pressure “for free”. But if an idea’s popularity in a wide field is determined mainly by its appeal to the median researcher, I would naturally expect its popularity in a small community to be determined mainly by its appeal to the median community member.
This claim looks like it’s implying that research communities can build better-than-median selection pressures. But can they? And if so, why have we hypothesized that scientific fields don’t?
I think Valentine gave a good description of the psychopath as “people who are naturally unconstrained by social pressures and have no qualms breaking even profound taboos if they think it’ll benefit them”. Just eyeballing human nature, that seems to be a “real” category, one that would show up as a distinct blip in a graph of human behaviour, and not just “how constrained people are by social pressures is a normally distributed property, and people get called psychopaths in linear proportion to how far left they are on the bell curve”.
Yep, your intended meaning about the distinctive mental architecture was pretty clear, just wanted to offer the factual correction.
They made it so the sociopath at the top of the pyramid was the kind that’s clever and myopic and numerate and invested in the status quo
The word “myopic” seems out of place in this list of positive descriptors, especially contrasted with crazed gloryhounds. Was this supposed to be “farsighted”?
By “psychopath” I mean someone with the cluster B personality disorder.
There isn’t a cluster B personality disorder called psychopathy. Psychopathy has never been a formal disorder; the closest we’ve ever come was back in 1952, when the DSM-I had a condition called “Sociopathic Personality Disturbance”. The closest you’ll get these days is Antisocial Personality Disorder, a garbage-bin diagnosis that covers a fairly broad range of antisocial behaviours, including the thing most people have in mind when they say “psychopath”, but also plenty of other personality archetypes that don’t seem particularly psychopathic, like adrenaline junkies and people with impulse-control issues.
I think you might be living in a highly-motivated smart and conscientious tech worker bubble.
Like, in a world where the median person is John Wentworth
“What if the entire world was highly-motivated smart and conscientious tech workers?” is the entire premise here.
If there were a unified actor called The Democrats that chose Biden, then it chose poorly, sure. But it seems very plausible that there were a bunch of low-level strategists who rationally thought “Man, Biden really shouldn’t run, but I’ll get in trouble if I say that, and I prefer having a job to having a Democratic president”, plus a group of incentive-setters who rationally thought they would personally benefit more from creating the conditions for that behaviour than from creating conditions that would select the best candidate.
It’s not obvious to me that this is a thinking carefully problem and not a principal-agent problem.