Don’t let them tell us stories. Don’t let them say of the man sentenced to death “He is going to pay his debt to society,” but: “They are going to cut off his head.” It looks like nothing. But it does make a little difference.
-- Camus
“From the fireside house, President Reagan suddenly said to me, ‘What would you do if the United States were suddenly attacked by someone from outer space? Would you help us?’
“I said, ‘No doubt about it.’”
“He said, ‘We too.’”
-- Mikhail Gorbachev, from an interview
You assume the utility of getting neither is 0 both before and after the transformation. You need to transform that utility too, e.g. from 0 to 4.
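A minimal illustration (the coefficients are hypothetical, chosen so that 0 maps to 4): preferences are preserved only if a positive affine transformation is applied to every outcome, including getting neither.

$$U'(x) = 2\,U(x) + 4 \quad\Rightarrow\quad U'(\text{neither}) = 2 \cdot 0 + 4 = 4$$

Leaving the utility of getting neither at 0 while transforming the other outcomes changes the preference ordering over gambles.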
Lost 35 lbs in the past 4 months, currently at 11.3% body fat, almost at my goal of ~9% body fat, at which point I’ll start bulking again. Average body fat is ~26% for men in my age group. My FFMI (= fat-free mass / height^2) is still somewhere above the 95th percentile for men in my age group.
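For concreteness, a worked example with invented numbers (FFMI is conventionally in kg/m²): a 75 kg man at 11.3% body fat and 1.80 m tall has

$$\text{FFMI} = \frac{75 \times (1 - 0.113)}{1.80^2} \approx 20.5\ \text{kg/m}^2.$$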
Got an indoor treadmill and have since walked 1100 km in the past 2 months: an average of 18 km/day, 4.5 hours/day. Would definitely recommend this.
Scored 2 points short of perfect on the GRE. Got a 3.8 average for college courses over the past year.
You usually avoid unlimited liability by placing a stop order to cover your position as soon as the price rises sufficiently. Alternatively, you can bound your losses by including a term in the contract saying that, instead of returning the stock you borrowed and sold, you may pay a fixed price.
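A worked example with invented prices (the bound is approximate; a stop order can fill past its trigger in a fast or gapping market): short at \$50 with a buy-stop at \$60, and

$$\text{max loss per share} \approx P_{\text{stop}} - P_{\text{sale}} = \$60 - \$50 = \$10,$$

versus unbounded loss if you never cover as the price rises.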
Introspecting, the way I remember this is that 1 is a simple number, and type 1 errors are errors that you make by being stupid in a simple way, namely by being gullible. 2 is a more sophisticated number, and type 2 errors are ones you make by being too skeptical, which is a more sophisticated type of stupidity. I do most simple memorization (e.g. memorizing differentiation rules) with this strategy of “rationalizing why the answer makes sense”. I think your method is probably better for most people, though.
The distinction is whether they believe your confidence versus whether they believe their own evidence about your value. If a person is confident, either he’s low-value and lying about it, or he’s high-value and honest. The modus ponens/tollens description is unclear; I think I only used it because it’s a LW shibboleth. (Come to think of it, “shibboleth” is another LW shibboleth.)
Sunlight increases risk of melanoma but decreases risk of other, more deadly cancers. If you’re going to get, say, 3 times your usual daily sunlight exposure, then sunscreen is probably a good idea, but otherwise it’s healthier to go without. I’d guess a good heuristic is to get as much sunlight as your ancestors from 1000 years ago would have gotten.
You don’t need to reconstruct all the neurons and synapses, though. If something behaves almost exactly as I would behave, I’d say that thing is me. 20 years of screenshots at 8 hours a day is around 14% of a waking lifetime, which seems like enough to pick out from mindspace a mind that behaves very similarly to mine.
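The arithmetic, assuming (my assumption) a ~70-year waking lifetime at 16 waking hours per day:

$$\frac{20\ \text{yr} \times 8\ \text{h/day}}{70\ \text{yr} \times 16\ \text{h/day}} = \frac{160}{1120} \approx 14\%.$$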
Confidence is the alief that you have high value, and it induces confidence-signalling behaviors. People judge your value partly by actually looking at your value, but they also take the shortcut of just directly looking at whether you display those signals. So you can artificially inflate your status by having incorrect confidence, i.e. alieving that you’re more valuable than you really are. This is called hubris, and when people realize you’re doing it they reduce their valuation of you to compensate. (Or sometimes they flip that modus tollens into a modus ponens, and you become a cult leader. So it’s polarizing.)
But it is a prosocial lie that you should have incorrect underconfidence, i.e. that you should alieve that you have lower value than you actually do. This is called humility, and it’s prosocial because it allows the rest of society to exploit you, taking more value from you than they’re actually paying for. Since it’s prosocial, society paints humility as a virtue, and practically all media, religious doctrine, fiction, etc. repeatedly insist that humility (and good morals in general) will somehow cause you to win in the end. You have to really search to find media where the evil, confident guys actually crush the good guys. So if this stuff has successfully indoctrinated you (and if you’re a nerd, it probably has), then you should adjust your confidence upwards to compensate, and this will feel like hubris relative to what society encourages.
Also, high confidence has lower drawbacks nowadays than what our hindbrains were built to expect. People share less background, so it’s pretty easy to reinvent yourself. You don’t know people for as long, so you have less time for people to get tired of your overconfidence. You’re less likely to get literally killed for challenging the wrong people. So a level of confidence which is optimal in the modern world will feel excessive to your hindbrain.
Allow the AI to reconstruct your mind and memories more accurately and with less computational cost, hopefully; the brain scan and DNA alone probably won’t give much fidelity. They’re also fun from a self-tracking data analysis perspective, and they let you remember your past better.
Getting an air filter can gain you ~0.6 years of lifespan, plus some healthspan. Here’s /u/Louie’s post where I saw this.
Lose weight. Try Shangri-La, and if that doesn’t work consider the EC stack or a ketogenic diet.
Seconding James_Miller’s recommendation of vegetables, especially cruciferous vegetables (broccoli, bok choy, cauliflower, collard greens, arugula...). Just eat entire plates of the stuff often.
Write a script that takes a screenshot and webcam picture every 30 seconds. Save the files to an external hard drive. After a few decades, bury the drive, along with some of your DNA and possibly brain scans, somewhere it’ll stay safe for a couple hundred years or longer. This is a pretty long shot, but there’s a chance that a future FAI will find your horcrux and use it to resurrect you. I think this is a better deal than cryonics since it costs so much less.
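A minimal sketch of such a script in Python, assuming the third-party mss (screenshots) and opencv-python (webcam) packages; the output path is a hypothetical mount point:

```python
import time
from datetime import datetime
from pathlib import Path

import cv2  # pip install opencv-python
import mss  # pip install mss

OUT = Path("/mnt/external/archive")  # hypothetical external-drive mount point
OUT.mkdir(parents=True, exist_ok=True)

cam = cv2.VideoCapture(0)  # default webcam
with mss.mss() as screen:
    while True:
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        # Full-desktop screenshot (all monitors) to a timestamped PNG.
        screen.shot(mon=-1, output=str(OUT / f"{stamp}-screen.png"))
        ok, frame = cam.read()
        if ok:  # webcam may be unavailable; skip the frame rather than crash
            cv2.imwrite(str(OUT / f"{stamp}-webcam.jpg"), frame)
        time.sleep(30)
```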
Which country should software engineers emigrate to?
I’m going to research everything, build a big spreadsheet, weight the various factors, etc. over the next while, so any advice that saves me time or improves the accuracy of my analysis is much appreciated. Are there any non-obvious considerations here?
There are some lists of the best countries for software developers, and for expats in general. These consider things like software-dev pay, cost of living, taxes, crime, happiness index, etc., and generally recommend Western Europe, the US, Canada, Israel, Australia, New Zealand, Singapore, Hong Kong, Mexico, and India. Other factors I’ll have to consider are emigration difficulty and language barriers.
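The weighted-spreadsheet step can be prototyped in a few lines; every weight and score below is a placeholder to be replaced by the actual research:

```python
# Placeholder weights (summing to 1) and 0-10 scores, invented for illustration.
weights = {"pay": 0.30, "cost_of_living": 0.20, "taxes": 0.10,
           "crime": 0.10, "happiness": 0.10, "emigration_difficulty": 0.20}

scores = {
    "Canada":    {"pay": 7, "cost_of_living": 6, "taxes": 5,
                  "crime": 8, "happiness": 8, "emigration_difficulty": 7},
    "Singapore": {"pay": 8, "cost_of_living": 4, "taxes": 9,
                  "crime": 9, "happiness": 7, "emigration_difficulty": 5},
}

totals = {c: sum(weights[f] * s[f] for f in weights) for c, s in scores.items()}
for country, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{country}: {total:.2f}")
```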
The easiest way to emigrate is to marry a local. Otherwise, emigrating to the US requires either paying $50k USD, or working in the US for several years (under a salary reduction and risk that are about as bad as paying $50k), and other countries are roughly as difficult. I’ll have to research this separately for each country.
Yes, the effect of diets on weight loss is roughly mediated by their effect on caloric intake and expenditure. But this does not mean that “eat fewer calories and expend more” is good advice. If you doubt this, note that the effect of diets on weight loss is also mediated by their effects on mass, but naively basing our advice on conservation of mass would generate terrible advice like “pee a lot, don’t drink any water, and stay away from heavy food like vegetables”.
The causal graph to think about is “advice → behavior → caloric balance → long-term weight loss”, where only the advice node is modifiable when we’re deciding what advice to give. Behavior is a function of advice, not a modifiable variable. Empirically, the advice “eat fewer calories” doesn’t do a good job of making people eat fewer calories. Empirically, advice like “eat more protein and vegetables” or “drink olive oil between meals” does do a good job of making people eat fewer calories. The fact that low-carb diets “only” work by reducing caloric intake does not mean that low-carb diets aren’t valuable.
Here’s an SSC post and ~700 comments on cultural evolution: http://slatestarcodex.com/2015/07/07/the-argument-from-cultural-evolution/
Replace “if you don’t know” with “if you aren’t told”. If you believe 80% of them are easy, then you’re perfectly calibrated as to whether or not a question is easy, and the apparent under/overconfidence remains.
About that survey… Suppose I ask you to guess the result of a biased coin which comes up heads 80% of the time. I ask you to guess 100 times, of which ~80 times the right answer is “heads” (these are the “easy” or “obvious” questions) and ~20 times the right answer is “tails” (these are the “hard” or “surprising” questions). Then the correct guess, if you aren’t told whether a given question is “easy” or “hard”, is to guess heads with 80% confidence, for every question. Then you’re underconfident on the “easy” questions, because you guessed heads with 80% confidence but heads came up 100% of the time. And you’re overconfident on the “hard” questions, because you guessed heads with 80% confidence but got heads 0% of the time.
So you can get apparent under/overconfidence on easy/hard questions respectively, even if you’re perfectly calibrated, if you aren’t told in advance whether a question is easy or hard. Maybe the effect Yvain is describing does exist, but his post does not demonstrate it.
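A quick simulation of the coin setup above; the “easy”/“hard” split is made after the fact by conditioning on the outcome:

```python
import random

random.seed(0)
N = 100_000
P_HEADS = 0.8  # the coin's bias; the guesser says "heads, 80% confident" every time

flips = [random.random() < P_HEADS for _ in range(N)]
easy = [f for f in flips if f]      # questions where the obvious answer (heads) was right
hard = [f for f in flips if not f]  # questions where it was wrong

print(f"overall accuracy: {sum(flips) / N:.0%}")         # ~80%: perfectly calibrated
print(f"easy questions:   {sum(easy) / len(easy):.0%}")  # 100%: looks underconfident
print(f"hard questions:   {sum(hard) / len(hard):.0%}")  # 0%: looks overconfident
```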
Every child has both a mother and a father, and there are about as many men as women, so the mean number of children is about the same for males as for females. But there are more childless men than childless women, because polygyny is more common than polyandry, ultimately because of Bateman’s principle.
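A toy example of my own to make the asymmetry concrete:

```python
# Invented 4-man, 4-woman population: polygyny concentrates fatherhood.
fathers = ["A", "A", "A", "B"]  # man A fathers 3 children, man B fathers 1
mothers = ["W", "X", "Y", "Z"]  # each woman mothers exactly 1 child
men, women = {"A", "B", "C", "D"}, {"W", "X", "Y", "Z"}

print(len(fathers) / len(men), len(mothers) / len(women))  # means: 1.0 vs 1.0
print(men - set(fathers), women - set(mothers))            # childless: {C, D} vs none
```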
Depends on your feature extractor. If you have a feature that measures similarity to previously-seen films, then yes. Otherwise, no. If you only have features measuring what each film’s about, and people like novel films, then you’ll get conservative predictions, but that’s not really the same as learning that novelty is good.
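A sketch with synthetic data (everything here is invented for illustration): when ratings are driven by novelty, a learner whose features only describe content makes flat, conservative predictions, while one that also sees a dissimilarity-to-previously-seen feature picks up the novelty effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
content = rng.normal(size=(n, 3))            # features for what each film is about
novelty = rng.normal(size=n)                 # dissimilarity to previously seen films
rating = novelty + 0.1 * rng.normal(size=n)  # people like novel films

for name, X in [("content only", content),
                ("content + novelty", np.column_stack([content, novelty]))]:
    coef, *_ = np.linalg.lstsq(X, rating, rcond=None)
    print(f"{name}: prediction variance {np.var(X @ coef):.3f}")
# content only:      ~0.0 (conservative predictions near the mean rating)
# content + novelty: ~1.0 (captures the novelty effect)
```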
I don’t understand that screenshot at all (maybe the resolution is too low?), but from your description it sounds similar in spirit to Zendo, Eleusis, and Penultima, which you could get ideas from. Yours seems different, though, and I’d be curious to know more details. I tried implementing some single-player variants of Zendo five years ago, though they’re pretty terrible (boring, no graphics, probably not useful for training rationality).
I do think there’s some potential for rationality improvements from games, though insofar as they’re optimized for training rationality, they won’t be as fun as games optimized purely for being fun. I also think it’ll be very difficult to achieve transfer to life-in-general, for the same reason that learning to ride a bike doesn’t train you to move your feet in circles every time you sit in a chair. (“I pedal when I’m on a bike, to move forward; why would I pedal when I’m not on a bike, and my goal isn’t to move forward? I reason this way when I’m playing this game, to get the right answer; why would I reason this way when I’m not playing the game, and my goal is to seem reasonable or to impress people or to justify what I’ve already decided?”)