Maybe a spreadsheet would be more informative. You could easily aggregate many answers.
Ramiro P.
If you grab your phone with dirty hands, then wash them, and then use the device again, you've just recontaminated your hands; and if you never clean the phone's surface (how do we do that effectively?), it will accumulate pathogens. This seems to be a serious problem in hospitals.
(I’m not sure if I follow your reasoning; it apparently implies that, if you never shake hands with someone else, you never have to worry about washing them. Of course, it does reduce the potential for transmission.)
We should take into account the welfare of others, too. Besides protecting me from disease, washing my hands prevents me from transmitting it to someone else. It’s pretty much analogous to vaccines.
I've seen some things about how dirty cellphones are, and how they can worsen interpersonal disease transmission. I wonder if there's any advice on how to keep one clean (and how useful it would be).
I do think Lanrian nailed it: there's no process ensuring fitness selection in dating. On the other hand, we're missing an opportunity to go meta here: if everyone could mimic the features that make a picture successful, those features should lose their importance, since they would no longer reliably signal that someone is a good mate. If I'm not bright enough to see through a carefully planned image of a smile and a discreet cleavage, I'm probably not bright enough to produce a similarly attractive picture of myself. Plus, the usual cognitive bias: I often mistake how much I like my pics (for the emotions they evoke, for example) for my assessment of their quality, and so irrationally rate them above average. But yeah, now I can use Photofeeler.
I find LeCun's insistence on the analogy with legal systems particularly interesting, because it reminds me more of Russell's proposal of "uncertain objectives" than of the "maximize objective function" paradigm. At least in liberal societies, we don't have a definite set of principles and values that people would agree to follow; instead, we aim at principles that guarantee an environment where any reasonable person can reasonably optimize for something like their own comprehensive doctrine.
However, the remarkable disanalogy is that, even though social practices change and clever agents adapt faster than law can evolve (as Goodhart remarks), that gap is not nearly as wide as the one opened by the pace of technology.
So far, LW is still online. It means:
a) either nobody used their launch codes, and you can trust 125 nice & smart individuals not to take unilateralist action—so we can avoid armageddon if we just have coordinated communities with the right people;
b) or nobody used their launch codes because these 125 are very like-minded people (selection bias): there's no immediate incentive to blow it up (except for some offers of counterfactual donations), but some incentive to avoid it (honor!… hope? Proving EDT, UDT…?). This doesn't model the problem of MAD, and it certainly doesn't model Petrov's dilemma: he went against express orders to minimize the chance of nuclear war, thus risking his career (and possibly his life);
c) or this is a hoax. That's what I would do; I wouldn't risk a day of LW just to prove our honor (sorry, I grew up in a tough neighborhood and have trouble trusting others).
My point is: I think (b) and (c) are far more likely than (a), so if I had the launch codes, I'd use them and take the risk of ostracism. I think it would yield higher expected utility; as I said, I wouldn't risk a day of LW to prove our honor, but I would to prove that you shouldn't play Petrov lightly.
Please, correct me if I’m wrong.
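The expected-utility argument above can be made concrete with a toy calculation. All probabilities and utilities below are made-up illustrative numbers (my own hypothetical guesses, not anything measured), just to show the shape of the comparison:

```python
# Toy expected-utility comparison for pressing vs. not pressing.
# Scenarios: (a) genuine unilateralist test, (b) selection/incentive
# artifact, (c) hoax. All numbers are hypothetical placeholders.
p = {"a": 0.1, "b": 0.5, "c": 0.4}  # my (made-up) scenario probabilities

# Utility of pressing in each scenario: in (a) we lose a day of LW;
# in (b) and (c) we demonstrate Petrov shouldn't be played lightly.
u_press = {"a": -10, "b": 5, "c": 5}
u_wait = {"a": 0, "b": 0, "c": 0}  # status quo as the baseline

def expected_utility(utilities, probs):
    """Probability-weighted sum of utilities over scenarios."""
    return sum(probs[s] * utilities[s] for s in probs)

eu_press = expected_utility(u_press, p)
eu_wait = expected_utility(u_wait, p)
print(eu_press, eu_wait)
```

Under these (again, invented) numbers pressing comes out ahead; the point is only that the conclusion hinges on how much weight (b) and (c) get relative to (a).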
P.S.: (d) this setup allows anyone to claim to have launch codes and mug others into counterfactual donations, which is brilliant.
I thought he was being ambiguous on purpose, so as to maximize donations.
I am wondering about the link between the notion of distance (in the first post), extremes on a utility scale, and "big deal". That's me in 15′
My opinion (“epistemic status”): dunno.
I remember an issue of The Economist from 2013 about this. There's some debate among economists over the absence of productivity improvements despite the buzz around AI and ICT; Erik Brynjolfsson argues that it takes time for pervasive general-purpose technologies to have an impact (e.g., electricity). However, the main point of Thiel & Weinstein is that we haven't found new breakthroughs that are easy to profit from.
But it reminds me of Cixin Liu's Dark Forest setting, where humankind stalled because physics breakthroughs were blocked by the Sophon Barrier; even so, they built a utopian society thanks to cheap energy from fusion power.
Your argument is sound. To me, it's curious that development economists almost never mention the temperature-productivity relation, except for J. Sachs (who mixes it with other geographical factors) and Nordhaus (who won a Nobel Prize for reasoning about it).
OK, but the point is: how do you aggregate this in a prediction market? You have no incentive to bet on Earth's doom.
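A toy payoff calculation shows why: a bet that only pays off in worlds where you're dead (or where money is worthless) has no realized value to you, so market prices can't reflect true doom probabilities. The numbers and function below are purely illustrative, not a model of any actual market:

```python
# Toy illustration of why prediction markets can't aggregate
# beliefs about extinction. All numbers are purely illustrative.

def expected_payoff(p_doom, stake, odds, payoff_usable_if_doom=False):
    """Expected value of betting `stake` on doom at the given odds.

    If doom occurs, the bet technically wins, but the winnings are
    only worth something if you're around to spend them.
    """
    win = stake * odds if payoff_usable_if_doom else 0.0
    lose = -stake  # doom doesn't happen: you lose your stake
    return p_doom * win + (1 - p_doom) * lose

# Even someone who believes doom is 90% likely expects to lose money,
# because the winning branch is worthless to them:
print(expected_payoff(p_doom=0.9, stake=100, odds=10))
```

Flip `payoff_usable_if_doom` to `True` and the same bet looks great, which is exactly the counterfactual a real bettor never gets to enjoy.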
There's a cool name for this donor behavior: blindspotting (yes, it's written like that), after a 1988 book by Roy Sorensen.
My average is a week, I guess. I only share ideas I can't falsify or get out of my head, and I try them on close friends first. And I admit I'm kind of sensitive to negative feedback.