I’m an aspiring EA / rationalist. My previous posts were intended in a slightly tongue-in-cheek way (but conveying ideas I take seriously); future posts will endeavour to lay out my ideas more clearly, in keeping with LW norms.
I wonder what you think of the super-setting weights vs. HIIT trade-off?
I’ve gone full circle on this: I used to prefer HIIT, then I switched to hypertrophy-style weight training (mainly after watching exercise-science YouTube, RP etc.), and now I’ve gone back to HIIT for most workouts. A typical workout looks like a 21-15-9 progression of 5 or 6 exercises, e.g. weighted squats, pull-ups, burpees, lunges, kettlebell swings, press-ups, leg raises, box jumps, or (relatively light) deadlifts or Olympic lifts, for 15-25 minutes. My heart rate usually stays above 140, and I hit VO2 max at some point.
To me, HIIT feels way better, and more time efficient. Hypertrophy training (even with supersets, which are definitely better) still feels a bit more like a chore, and I never get a “buzz”.
I don’t have good enough theory of mind to know which is best to recommend to others, though.
There’s a world of difference between “let’s just continue doing this research project on something obscure with no theory of impact because penicillin” and “this is more likely than not to become irrelevant in 18 months’ time, but if it works, it will be a game-changer”.
Robustness and generalizability are subcomponents of the expected value of your work/research. If you think that these are neglected, and that your field is too focused on the “impact” components of EV, i.e. there are too many moon shots, please clarify that, but your analogy fails to make this argument.
As it is, I suspect that optimizing for robust generalizability is a sure-fire way of ensuring that most people become “very general helpers”, which seems like a very harmful thing to promote.
Despite finishing your comment in a way that I hope we can all just try to ignore… you make an interesting point. The Pollywog example works well, if accurate. If wild animal suffering is the worst thing in the world, it follows that wild animal pleasure could easily be the best thing in the world, and identifying species for which this is true might be a huge opportunity to do good. This seems like one of the only ways to make the world net-positive, if we do choose to maintain biological life.
But, tragically, I think that’s a difficult case to make for most animals. Omnizoid addresses it partly: “If you only live a few weeks and then die painfully, probably you won’t have enough welfare during those few weeks to make up for the extreme badness of your death. This is the situation for almost every animal who has ever lived.” But I think he understates it here.
Most vertebrates are larval fish. 99%+ of fish larvae die within days. For a larval fish, being eaten by predators (about 75%, on average) is invariably the best outcome, because dying of starvation, temperature changes, or physiological failure (the other 25%) seems a lot worse.

When researchers starve baby fish to death in experiments (your reminder that ethics review boards have a very peculiar definition of ethics), they find that most sardines born in a single spawning never even start exogenous feeding, surviving for a few days on existing energy reserves. I would speculate that much of this time is spent in a state of constant hunger stress, driven by an extremely high metabolism and rising cortisol levels. For the vast majority who cannot secure food, their few hours or days of existence probably look like a desperate struggle as they gradually weaken and lose energy before dying. This is partly because they were born too small to ever have a chance of exogenous feeding: like a premature human baby unable to suckle, most don’t have the suction force to consume plankton.
I don’t doubt that there might be some pleasure there to balance out the suffering, but it seems like a hard sell for most r-strategists.
If you’ll forgive diving into the linguistics here, English is atypical in that we have a distinctive present perfect.
In French, the present-perfect-type construction (passé composé) has subsumed the past simple, which is now reserved for literary or archaic uses, so you use the passé composé for any past experience. But you can use something close to our future perfect with a similar sense: “J’aurai terminé mon diplôme d’ici mars” (“I will have finished my degree by March”). In Chinese (and other Sinitic languages), there’s a present-perfect-type construction, albeit used less consistently (e.g. “have been to…” quguo 去过), but it’s very awkward to use future-perfect-style constructions like “I just want to have read this”. I’m sure many obscure languages will have even less of a distinction.
So, if the theory in your title is correct, people from languages without a present perfect will have unruined lives, or at least lives ruined by different grammatical structures (we’re looking at you, subjonctif...)! This sounds like a testable theory to me!

There are a bunch of theories about language influencing thought patterns, e.g. the idea that speakers of futureless languages save more ( https://www.anderson.ucla.edu/faculty/keith.chen/papers/LanguageWorkingPaper.pdf ), so you could test something similar for perfect tenses. I hear that some Italian and Spanish dialects differ in whether the present perfect and past simple are distinct, so you might have a great natural experiment there.
My personal hobby horse here is counterfactual conditionals, usually used to express regret (mainly: “If I had done x...”, “I should have done x...”, “I wish I’d done x...”). I used to have harmful, regretful thought patterns like this until I learned Chinese very immersively. I realised that, in Chinese-thinking mode, I had stopped using these conditionals in my train of thought, and noticed them returning when I reintegrated into an English-speaking context. It wasn’t a clean experiment by any means: it could have been partly due to thinking in a non-native language, which made (over-)thinking slower, and of course I was living in China, with obviously massive lifestyle effects. But still, I did identify these conditional thought patterns as almost definitely negative, so I’m convinced there’s at least some effect.

I haven’t noticed “wanting to have done something” being less common in Chinese- or French-mode because of a more limited present perfect, but it would be interesting if bilinguals on LW have noticed something there.
I’m not saying to endorse prejudice. But my experience is that many types of prejudice feel more obvious. If someone has an accent that I associate with something negative, it’s usually pretty obvious to me that it’s their accent that I’m reacting to.
Of course, not everyone has the level of reflectivity to make that distinction. But if you have thoughts like “this person gives me a bad vibe but maybe that’s just my internalized prejudice and I should ignore it”, then you probably have enough metacognition to also notice if there’s any clear trait you’re prejudiced about, and whether you would feel the same way about other people with that trait.
The most common situation in which you’d ignore bad vibes seems to be when a trait like this confuses your signals. When you identify a negative trait that “feels more obvious”, especially one it’s socially taboo to be prejudiced against (race, ethnicity/accent, LGBT status, mental/physical disability), this can interfere with your ability to correctly interpret other evidence (including “vibes”), making it very easy to overcompensate in the other direction.
The classic example from women’s self-defence classes: you enter an enclosed space (e.g. a lift) with a man of a particular ethnicity who makes you instantly nervous. You consider not getting in, but then think “oh, this must just be his ethnicity I’m reacting to”, castigate yourself for your prejudice, ignore the bad vibes, and get in anyway, and it turns out he was dodgy.
Or a neuro-atypical colleague suggests a small business venture in a manner that would normally raise red flags. You get “bad vibes”, but you interpret this as irrational prejudice against autistic behaviour traits, so you go along with it despite your vibes. Only later do you realise that your red flags were real, and your correction for prejudice was adding unnecessary noise into your decision-making.
I don’t know whether there’s evidence to back this up, but my sense is that “correction for potential prejudice” would be the major source of error here, especially among people who are more reflective.
I explained why I think tracing back personal history is impractical.
Your separate method of spot-checking my model is just a simplified version of the same model.
Well, you can stick your own numbers into the model and see what you get—a few tweaks in the estimates puts farmer ancestors higher, as would assuming more prehistoric lineage collapses.
For example, if you think that almost everyone who had offspring from 2000 BC to 1200 AD was your ancestor, then you get more farmer ancestors. I initially put it closer to 40% (assuming little to no Sub-Saharan or Native American ancestry, and a more gradual spread throughout Eurasia), but the model is sensitive to these estimates.

From a “Eurasia-centric” perspective, my sense is that personal ancestry doesn’t make a major difference except for pockets like Siberia and Iceland, perhaps. It’s noticeably different for people with some New World or Sub-Saharan ancestry, and wildly different if you’re pure-blooded Aboriginal Australian.
Sorry, just read this response.
On the intuition question, my intuition was probably the other way because most of human history was non-farming, and because the vast majority of farmers (those born in the last millennium) weren’t my ancestors.
I updated my model to account for an error; it’s now a bit closer: 7.8 billion non-farmers to 6.4 billion farmers, and 4.9 billion exclusive farmers. But I still basically stand by the logic.

To respond to your question about why I didn’t pick a fixed number of personal ancestors:
We have fewer recent ancestors: assuming 16 generations, we’d have around 20k to 50k ancestors in 1600 (2^16, minus inbreeding). If we want to count these ancestors carefully, we should count back with an algorithm accounting for population size and exponentially increasing inbreeding.
We could also plausibly use this strategy to get a more accurate number of ancestors from 1200-1600; this might be a period where individual/geographical differences, or population constraints, play a significant role. If you’re Icelandic, most of your ancestors in this period will still be from Iceland, but if you’re Turkish, your ancestors from this period are more likely to extend from Britain to Japan. My model doesn’t do this, because it sounds difficult, and because the numbers are negligible anyway: I just estimate that 0.1% to 1% of total humans born from 1200 to today were my ancestors.
By around 1200 AD, it surely becomes impractical to rely on a personal family tree to track ancestry, because of the exponential growth in the number of ancestors. Beyond that point, your total potential ancestors (in the billions, without factoring in inbreeding) massively exceed the global population (in the 100s of millions). The limited population size becomes the constraint.
So an Italian might assume that they are descended from a significant portion (40%?) of Europe’s population in 1200 AD. By 800 AD, this would extend to a majority (60%?) of people living across Eurasia and Northern Africa. By the time we reach 500 BC to 1000 AD, it’s likely that most people from the major Old World civilizations and peripheries (where the bulk of the global population lived) were direct ancestors of people alive today. My numbers could be way off, but I think this is a better way of getting in the right ballpark than trying to trace back individual ancestry. I used these figures as a baseline. https://www.prb.org/articles/how-many-people-have-ever-lived-on-earth/
You’re right that I don’t account for major bottlenecks; my assumption is that they basically even out over time, and that there’s a constant 20-60% chance of humans born in each period leaving no descendants alive today. If you wanted to refine this model, you’d take into account more recent (e.g. Black Death) and less recent (Neolithic Y-chromosome bottleneck) bottlenecks.
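For what it’s worth, here’s a minimal sketch of the kind of counting-back algorithm I mean. It assumes a single randomly-mating population (obviously false, but fine for ballpark purposes) and uses the standard occupancy approximation: with 2^g ancestor slots drawn from a population of size N, the expected number of distinct ancestors is roughly N * (1 - (1 - 1/N)^(2^g)). The population figures below are illustrative placeholders, not real estimates.

```python
# Sketch: expected distinct ancestors g generations back, capped by population.
# Assumes one randomly-mating population (a big simplification). Occupancy
# approximation: E[distinct ancestors] = N * (1 - (1 - 1/N)**slots), where
# slots = 2**g is the ancestor count ignoring pedigree collapse.

def expected_distinct_ancestors(generations, population_by_gen):
    """population_by_gen[g] = assumed reachable population g generations back."""
    out = []
    for g in range(1, generations + 1):
        slots = 2 ** g
        n = population_by_gen[g]
        distinct = n * (1 - (1 - 1 / n) ** slots)
        out.append((g, slots, round(distinct)))
    return out

# Illustrative only: ~25-year generations back to ~1200 AD, with a made-up
# reachable-ancestor pool growing from ~1 million to ~300 million people.
pool = {g: int(1e6 * 300 ** (g / 32)) for g in range(1, 33)}
for g, slots, distinct in expected_distinct_ancestors(32, pool):
    if g % 8 == 0:
        print(f"{g:>2} generations back: {slots:>13,} slots -> ~{distinct:,} distinct")
```

The point it illustrates is the one above: a few centuries back, the slot count dwarfs the population, so the population itself becomes the binding constraint.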
Ah, interesting. His Guerre des intelligences does seem more obviously accelerationist, but his latest book gives slightly different vibes, so perhaps his views are changing.
But my sense is that he actually seems kind of typical of the polémiste tradition in French intellectual culture, where it’s more about arguing with flair and elegance than developing consistent arguments. So it might be difficult to find a consistent ideology behind his combination of accelerationism, a somewhat pessimistic transhumanism, and moderate AI fear.
Thanks for this post, great to have this overview!
I can’t put my finger on whether Laurent Alexandre is an accelerationist—I don’t know his work too well, but he seems to acknowledge at least some AI-risk arguments.
This is a quote (auto-translated) from his new book:
“The political dystopia described by Harari, predicting that the world of tomorrow would be divided into “gods and useless people,” could unfortunately become a social reality. Regulating a force as monumental as ChatGPT and its successors would require international cooperation. However, the world is at war. Each geopolitical bloc will use the new AIs to manipulate the adversary and develop destructive or manipulative cyber weapons.”
My initial intuition was “surely there were more non-farmers”, but I did some calculations and it looks closer than I thought.
I had a go at a guesstimate model, where I estimate the number of humans who lived in each period, the % of them having offspring, the chance that I descend from them, and an estimated % who were farmers in each period.
I get 11 billion non-farming ancestors, and 4.6 billion farming ancestors (around 3.6 billion exclusively/mainly farmers).
What I see as the “crux period” is 0-1200 AD; I can’t find any data on how many of the humans in that period are likely to have been my/your ancestors. I’ve put 15-40%, but if it’s closer to 60%, farmers might edge it. Also, I haven’t accounted for lineages ending: aside from individuals not having offspring (which I take as a constant in the model), there may have been some huge lineage collapses, presumably more before farming than after.
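To make the model’s structure concrete, here’s a minimal sketch of its arithmetic. Every number in it is an illustrative placeholder, not my actual guesstimate input: for each period you multiply births by P(having offspring) by P(being my ancestor), then split by the farmer share, and sum.

```python
# Sketch of the guesstimate model's arithmetic. Every number below is an
# illustrative placeholder, NOT the actual inputs or outputs of my model.
# Expected ancestors per period = births * P(had offspring) * P(my ancestor);
# the farmer share then splits them into farming vs non-farming ancestors.

periods = [
    # (label,          births, p_offspring, p_ancestor, farmer_share)
    ("pre-10000 BC",    10e9,  0.55,        1.0,        0.0),
    ("10000-3000 BC",   15e9,  0.55,        0.9,        0.4),
    ("3000 BC-0 AD",    15e9,  0.55,        0.5,        0.75),
    ("0-1200 AD",       25e9,  0.55,        0.25,       0.8),   # the "crux period"
    ("1200 AD-today",   50e9,  0.55,        0.005,      0.6),
]

farming = non_farming = 0.0
for label, births, p_off, p_anc, farm_share in periods:
    ancestors = births * p_off * p_anc
    farming += ancestors * farm_share
    non_farming += ancestors * (1 - farm_share)
    print(f"{label:>14}: ~{ancestors / 1e9:.1f}B ancestors, {farm_share:.0%} farmers")

print(f"Totals: ~{farming / 1e9:.1f}B farming vs ~{non_farming / 1e9:.1f}B non-farming")
```

As you can see, the result swings on the P(ancestor) column for the middle periods, which is exactly why the 0-1200 AD estimate is the crux.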
Of course, but there’s a level of sun exposure at which the marginal increase in harm becomes negligible compared to other things that damage your skin (see this meta-analysis: photo-aging is just one component among many), and below that level you’re probably actually getting suboptimal levels of UV exposure for skin health (see this article on the benefits of UV, from Norway, aptly).
I’d love to see someone try to measure and compare the specific trade-offs, but I strongly suspect that people at northern latitudes should just trust common sense—only wear sunscreen in summer months, and when you’re actually exposed to the sun for extended periods.
I know LW is US/California-heavy, but just as a counter to all the sunscreen advocates here: daily sunscreen use is probably unnecessary, and possibly actively harmful, in winter and/or at northern latitudes.
There doesn’t seem to be much data on using sunscreen when there’s no real risk to skin, but you can find a modelling study here:
“There is little biological justification in terms of skin health for applying sunscreen over the 4–6 winter months at latitudes of 45° N and higher (most of Europe, Canada, Hokkaido, Inner Mongolia etc.) whereas year-round sunscreen is advised at latitudes of 30° N (e.g. Southern U.S., Shanghai, North Africa) and lower … Using products containing UV filters over the winter months at more northerly latitudes could lead to a higher number of people with vitamin D deficiency.”
Although most approved sunscreens are generally seen as safe, there are potential systemic health risks from a few products, some proven environmental harms, a potentially increased risk of vitamin D deficiency, and some time/financial costs.
There should be a question at the end: “After seeing your results, how many of the previous responses did you feel a strong desire to write a comment analyzing/refuting?” And that’s the actual rationalist score...
But I’m intrigued by a possible phenomenon here: the median LWer may be more likely to score highly on this test despite being less representative of LW culture, while core, more representative LWers are unlikely to score highly.
Presumably there’s some kind of power law with LW use (10000s of users who use LW for <1 hour a month, only 100s of users who use LW for 100+ hours a month).
I predict that the 10000s of less active community members are probably more likely to give “typical” rationalist answers to these questions: “Yeah, (religious) people stupid, ghosts not real, technology good”. The 100s of power users, who are actually more representative of a distinctly LW culture, are less likely to give these answers.
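As a rough sanity check that a power law produces exactly this shape, here’s a minimal simulation; the user count, Pareto shape, and scale are entirely made-up parameters for illustration.

```python
# Sketch: does a heavy-tailed distribution reproduce "10,000s of <1h/month
# users vs 100s of 100+h/month users"? All parameters are invented.
import random

random.seed(0)
N = 100_000  # hypothetical total user count
# Hours on LW per month: Pareto tail, shape alpha = 1.05, scale 0.2 hours.
hours = [0.2 * random.paretovariate(1.05) for _ in range(N)]

casual = sum(h < 1 for h in hours)
power_users = sum(h >= 100 for h in hours)
print(f"<1 h/month: ~{casual:,} users; 100+ h/month: ~{power_users:,} users")
```

With these placeholder parameters you get tens of thousands of casual users and a low hundreds of power users, consistent with the guess above.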
I got 9/24, by the way.
I think your intuitions are generally correct, and as I say, it’s usually a good heuristic to avoid overly processed food. In the absence of other evidence, if you’re in a food market where everything is edible, you should probably opt for the less processed option. I also don’t disagree with it playing a role in national health guidelines.
But it’s a very imprecise heuristic, and I think LessWrong-ers with aspirations to understand the world more accurately should feel a bit uncomfortable with it, especially when benign and beneficial processes are lumped together with those with much clearer mechanisms for harm.
Thanks for this piece. I admit I have always had a bit of residual aversion to seed oils that I’ve struggled to shake.
Having said that, as you’re pushing so strongly against seed oils in favour of “processing” as a mechanism for poor health, I think I need to push back a bit.

If you want to be healthier, we know ways you can change your diet that will help: Increase your overall diet “quality”. Eat lots of fruits and vegetables. Avoid processed food. Especially avoid processed meats.
“Avoid processed food” works very well as a heuristic—far better than anything like the “nutrition pyramid”, avoiding saturated fats/sugars or calorie counting etc. But it also seems like something that should annoy people who like clear thinking and taxonomies.
As you note, “processing” includes hundreds of processes, most of which have no plausible mechanism by which they might harm human health. Articles describing the ultra-processed taxonomy often just list a litany of bad-sounding things without an explanation why they’re bad e.g. “mechanically separated meat”, “chemical modifications” and “industrial techniques”. Most of these are either benign when you think about it (we’d all prefer a strong man wearing a vest separating our meat with his bare hands, but come now...), or so vague as to be uninformative.
If ultra-processed foods are bad because they contain “hydrogenated oil, modified starch, protein isolate, and high-fructose corn syrup” or “various cosmetic additives for flavour enhancement and colour”, then it’s these products that are bad, not some mysterious processing!
If it is some technical part of the processing, like “hydrolysis, hydrogenation, extrusion, moulding, or pre-frying” that’s bad, surely we should just identify that rather than lumping everything together?
If it’s some emergent outcome of all these processes, like “hyper-palatability” or “energy density”, then that’s the problem, not the fact of being “processed”. If so, we should all stop eating strawberries after they hit a certain deliciousness threshold, and avoid literally any edible oil (because all oil is identically energy-dense).
But, having said that, I still use this heuristic, and I’m pretty glad I trained myself out of preferring highly-processed food when I was less analytical.
Ah, thanks, okay, I get it now. That’s a very different proposition! Updated my post.
MoviePass users are selected for seeing a lot of movies. If MoviePass makes a business plan that models users as average people, it will lose a lot of money. Conditional on someone wanting to buy MoviePass, MoviePass probably should not want them as a customer.
I’m going to nitpick here and note that the marginal cost to the cinema of allowing in an extra customer is often close to zero, seeing as most films don’t sell out. The marginal value may even be positive, if they spend money on popcorn and drinks, and invite friends who don’t have a pass.

It seems from that article that the failure in the business model was partly that MoviePass was just badly managed, and partly that people were abusing the system in various ways by scalping, selling tickets, or getting hundreds of people to use the same account.

I checked my local cinema chain: they started running an ‘Unlimited’ service over a decade ago, and it’s still in use, so I think it remains a valid model.
Correction: I understand the MoviePass model now and the adverse selection argument makes more sense. Cinemas with a subscription model can work even with a high proportion of power users, but that’s because the externalities (popcorn, drinks, inviting friends) accrue to the cinema.
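A toy calculation (all numbers invented) of where the externalities accrue: a third-party pass pays full ticket price per visit, while a cinema-owned pass fills near-zero-marginal-cost seats and captures concession margins, so heavy users flip from liability to asset.

```python
# Toy numbers (all invented) for subscription-pass economics,
# per subscriber per month, as a function of how often they go.

ticket_price = 10.0       # what a third party like MoviePass pays per visit
sub_price = 10.0          # monthly subscription fee
concession_margin = 4.0   # cinema's profit on popcorn/drinks per visit
marginal_seat_cost = 0.5  # near-zero cost of filling an otherwise-empty seat

for visits in (1, 3, 8):  # light user, average user, power user
    third_party = sub_price - visits * ticket_price
    cinema_owned = sub_price + visits * (concession_margin - marginal_seat_cost)
    print(f"{visits} visits: third-party pass {third_party:+.0f}, "
          f"cinema-owned pass {cinema_owned:+.0f}")
```

Under these assumptions the third-party pass bleeds money as usage rises, while the cinema-owned pass gets more profitable, which is the adverse-selection asymmetry in a nutshell.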
I presume the stated goal of schooling your child in this way is to set the grown-up’s mind at ease, rather than ensuring the child is left alone (which is probably the default outcome), and I expect both responses would suffice for this instrumental purpose.
I definitely appreciate these scenarios, but it’s worth looking at where things don’t seem to fit, because people will often use these details to dismiss them.
In particular, this section seems to clash with my understanding of conflict logistics and incentives.
As far as I can tell, it’s 1) practically infeasible and 2) misaligned with both sides’ incentives to deplete their stockpiles of short- to mid-range missiles on the opposition’s soil in such a short timeframe.
For the PRC, the main objective would be to restore deterrence and maintain regional dominance. Some missile strikes might occur, but the focus would likely be on targeting naval forces rather than widespread strikes on land targets. Both sides would prioritise controlling the South China Sea, focusing on air superiority, naval engagements etc. The PRC wouldn’t want to spread themselves too thin; they might instead try to force the Taiwan issue through strikes on naval defences and regional infrastructure.
If the PRC continued to escalate, striking U.S. bases to knock out naval and air support to the South China Sea, it’s only feasible that they’d focus on bases they could reach with land-based missiles (max range 3,100 miles), like Guam and Okinawa. Striking the U.S. mainland would be logistically impractical, because China doesn’t have missile platforms anywhere nearby. Also, many of their longer-range missiles are dual-use (nuclear and conventional), so large-scale non-nuclear strikes would scupper their own deterrence. It’s also massively escalatory to attack the mainland, and risks direct nuclear exchange (where the U.S. would dominate, despite massive destruction on both sides).
Also, in terms of “depleting all their stockpiles”, I don’t think either side would be able to deploy their stockpiles within two weeks. The U.S. has a decent missile stockpile deployed on submarines and Pacific fleet vessels (maybe 1/3 of their total), which could be launched within a few days. But even if they wanted to instantly restock Tomahawk stockpiles to keep on blasting away at the Chinese mainland, it’d take well over two weeks to get stuff over from the Atlantic, and they wouldn’t have much strategic incentive to do so.
This is partly because you get diminishing returns on missile strikes: each additional missile has less marginal impact as key targets are destroyed or degraded, and the comparative value of holding missiles in strategic reserve (for when new high-value targets emerge) increases.