In America, people shopped at Walmart instead of local mom & pop stores because it had lower prices and more selection, so Walmart and other chain stores grew and spread while lots of mom & pop stores shut down. Why didn’t that happen in Wentworld?
I made a graph of this and the unemployment rate; they’re correlated at r=0.66 (with one data point for each time Gallup ran the survey, taking the unemployment rate on the closest day for which there’s data). You can see both lines spike with every recession.
Are you telling me 2008 did actual nothing?
It looks like 2008 led to about a 1.3x increase in the number of people who said they were dissatisfied with their life.
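In case anyone wants to reproduce that kind of matching: here’s a minimal sketch of the nearest-day merge and the correlation, in pandas. The DataFrames, column names, and numbers are placeholders I made up for illustration, not the actual Gallup or unemployment figures.

```python
import pandas as pd

# Placeholder numbers purely for illustration -- not the real survey or unemployment data.
satisfaction = pd.DataFrame({
    "date": pd.to_datetime(["2007-06-11", "2009-06-15", "2011-06-13"]),
    "pct_dissatisfied": [4.0, 6.1, 5.5],
})
unemployment = pd.DataFrame({
    "date": pd.to_datetime(["2007-06-01", "2009-06-01", "2011-06-01"]),
    "unemployment_rate": [4.6, 9.5, 9.1],
})

# For each survey date, take the unemployment rate on the closest date with data.
merged = pd.merge_asof(
    satisfaction.sort_values("date"),
    unemployment.sort_values("date"),
    on="date",
    direction="nearest",
)

# Pearson correlation across survey waves.
print(merged["pct_dissatisfied"].corr(merged["unemployment_rate"]))
```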
It’s common for much simpler Statistical Prediction Rules, such as linear regression or even simpler models, to outperform experts, even when those rules were built to predict the experts’ judgment.
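To see why that can happen, here’s a minimal synthetic simulation of the effect; the data-generating process and all of the numbers are made up for illustration. The “expert” below weighs roughly the right cues but applies them inconsistently, and a regression fit only to the expert’s past judgments averages that inconsistency away, so it ends up tracking the outcome better than the expert does.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_cues = 5000, 4

# Made-up data-generating process: the outcome depends linearly on observable cues.
cues = rng.normal(size=(n_cases, n_cues))
weights = np.array([1.0, 0.8, 0.5, 0.3])
outcome = cues @ weights + rng.normal(scale=1.0, size=n_cases)

# The "expert" weighs roughly the right cues but applies them inconsistently.
expert = cues @ weights + rng.normal(scale=1.5, size=n_cases)

# Fit a linear model to predict the expert's judgments (not the outcome).
coef, *_ = np.linalg.lstsq(cues, expert, rcond=None)
model_of_expert = cues @ coef

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print("expert vs. outcome:         ", round(corr(expert, outcome), 3))
print("model-of-expert vs. outcome:", round(corr(model_of_expert, outcome), 3))
# The model of the expert typically tracks the outcome better, because the
# regression strips out the expert's unsystematic trial-to-trial noise.
```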
Or “Defense wins championships.”
With the ingredients he has, he has gotten a successful Barkskin Potion:
1 of the 1 times (100%) he brewed together Crushed Onyx, Giant’s Toe, Ground Bone, Oaken Twigs, Redwood Sap, and Vampire Fang.
19 of the 29 times (66%) he brewed together Crushed Onyx, Demon Claw, Ground Bone, and Vampire Fang.
Only 2 other combinations of the in-stock ingredients have ever produced Barkskin Potion, both at under a 50% rate (4/10 and 18/75).
The 4-ingredient, 66% success rate potion looks like the best option if we’re just going to copy something that has worked. That’s what I’d recommend if I had to make the decision right now.
Many combinations that used currently-missing ingredients reliably (100%) produced Barkskin Potion many times (up to 118/118). There may be a variant on one of those, which he has never tried, that could work better than 66% of the time using ingredients that he has. Or there may be information in there about the reliability of the 6-ingredient combination which worked once.
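One rough way to compare those in-stock track records while keeping sample size in view is to put a Beta(1,1) prior on each combination’s success rate. The prior is my own assumption, just a sketch rather than anything from the original exercise:

```python
from scipy.stats import beta

# (successes, attempts) for the in-stock combinations described above.
records = {
    "6-ingredient (1/1)":   (1, 1),
    "4-ingredient (19/29)": (19, 29),
    "other (4/10)":         (4, 10),
    "other (18/75)":        (18, 75),
}

for name, (k, n) in records.items():
    a, b = 1 + k, 1 + (n - k)           # Beta(1,1) prior updated on the record
    mean = a / (a + b)
    lo, hi = beta.interval(0.90, a, b)  # 90% credible interval
    print(f"{name}: posterior mean {mean:.2f}, 90% CI ({lo:.2f}, {hi:.2f})")
```

On this accounting the 1/1 and 19/29 records have similar posterior means, but the 19/29 record pins the rate down far more tightly, which is the sense in which it’s the safest thing to copy.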
Being Wrong on the Internet: The LLM generates a flawed forum-style comment, such that the thing you’ve been wanting to write is a knockdown response to this comment, and you can get a “someone”-is-wrong-on-the-internet drive to make the points you wanted to make. You can adjust how thoughtful/annoying/etc. the wrong comment is. (A rough prompt sketch is below, after this list.)
Target Audience Personas: You specify the target audience that your writing is aimed at, or a few different target audiences. The LLM takes on the persona of a member of that audience and engages with what you’ve written, with more explicit explanation of how that persona is reacting and why than most actual humans would give. The structure could be like comments on Google Docs.
Heat Maps: Color the text with a heat map of how interested the LLM expects the reader to be at each point in the text, or how confused, how angry, how amused, how disagreeing, how much they’re learning, how memorable it is, etc. Could be associated with specific target audiences.
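For the first idea, here’s a rough sketch of what the prompt side could look like. The function name and the prompt wording are my own invention, not a standard tool or API, and the actual LLM call is left to whichever client you use.

```python
def wrong_on_the_internet_prompt(draft: str, tone: str = "mildly annoying") -> str:
    """Build a prompt asking an LLM to play a confidently wrong commenter.

    `draft` is the post you've been meaning to write; `tone` tunes how
    thoughtful vs. irritating the generated comment should be. Both the
    function and the prompt wording are illustrative, not a standard API.
    """
    return (
        "You are a commenter on an internet forum. Write a short, confidently "
        "argued comment that is wrong in a way the draft below would be a "
        f"knockdown response to. Make it {tone}, but plausible enough that "
        "someone might actually post it. Do not mention the draft.\n\n"
        f"DRAFT:\n{draft}"
    )

# Usage: send wrong_on_the_internet_prompt(my_draft) to whichever LLM you use,
# then write the reply you've been meaning to write.
print(wrong_on_the_internet_prompt("Drafts improve when you get concrete objections early..."))
```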
I don’t think that the key element in the aging example is ‘being about value claims’. Instead, it’s that what’s healthy is a question that many people wonder about. Since many people wonder about that question, some people will venture an answer, even if humanity hasn’t yet built up enough knowledge to have an accurate one.
Thousands of years ago, many people wondered what the deal was with the moon, and some of them made up stories about this factual (non-value) question whose correct answer was beyond them. And it plays out similarly these days with rumors/speculation/gossip about the topics that grab people’s attention. Where curiosity & interest exceed knowledge, speculation will fill the gaps, sometimes taking on a presentation similar to that of knowledge.
Note the dynamic in your aging example: when you’re in a room with 5+ people and you mention that you’ve read a lot about aging, someone asks the question about what’s healthy. No particular answer needs to be memetic because it’s the question that keeps popping up and so answers will follow. If we don’t know a sufficiently good/accurate/thorough answer then the answers that follow will often be bullshit, whether that’s a small number of bullshit answers that are especially memetically fit or whether it’s a more varied and changing froth of made-up answers.
There are some kinds of value claims that are pretty vague and floaty, disconnected from entangled truths and empirical constraints. But that is not so true of instrumental claims about things like health, where (e.g.) the claim that smoking causes lung cancer is very much empirical & entangled. You might still see a lot of bullshit about these sorts of instrumental value claims, because people will wonder about the question even if humanity doesn’t have a good answer. It’s useful to know (e.g.) what foods are healthy, so the question of what foods are healthy is one that will keep popping up when there’s hope that someone in the room might have some information about it.
7% of the variance isn’t negligible. Just look at the pictures (Figure 1 in the paper).
I got the same result: DEHK.
I’m not sure that there are no patterns in what works for self-taught architects, and if we were aiming to balance cost & likelihood of impossibility then I might look into that more (since I expect A, L, N to be the cheapest options with a chance to work), but since we’re prioritizing impossibility I’ll stick with the architects with the competent mentors.
Moore & Schatz (2017) made a similar point about different meanings of “overconfidence” in their paper The three faces of overconfidence. The abstract:
Overconfidence has been studied in 3 distinct ways. Overestimation is thinking that you are better than you are. Overplacement is the exaggerated belief that you are better than others. Overprecision is the excessive faith that you know the truth. These 3 forms of overconfidence manifest themselves under different conditions, have different causes, and have widely varying consequences. It is a mistake to treat them as if they were the same or to assume that they have the same psychological origins.
Though I do think that some of your 6 different meanings are different manifestations of the same underlying meaning.
Calling someone “overprecise” is saying that they should increase the entropy of their beliefs. In cases where there is a natural ignorance prior, it is claiming that their probability distribution should be closer to the ignorance prior. This could sometimes mean closer to 50-50 as in your point 1, e.g. the probability that the Yankees will win their next game. This could sometimes mean closer to 1/n as with some cases of your points 2 & 6, e.g. a 1/30 probability that the Yankees will win the next World Series (as they are 1 of 30 teams).
In cases where there isn’t a natural ignorance prior, saying that someone should increase the entropy of their beliefs is often interpretable as a claim that they should put less probability on the possibilities that they view as most likely. This could sometimes look like your point 2, e.g. if they think DeSantis has a 20% chance of being US President in 2030, or like your point 6. It could sometimes look like widening their confidence interval for estimating some quantity.
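To make “increase the entropy of their beliefs” concrete, here’s a small illustration (the specific distributions are arbitrary examples I picked): moving a forecast toward the relevant ignorance prior, whether that’s 50-50 or 1/n, raises its Shannon entropy.

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Binary case: a confident 90/10 forecast vs. the 50-50 ignorance prior.
print(entropy_bits([0.9, 0.1]))       # ~0.47 bits
print(entropy_bits([0.5, 0.5]))       # 1.00 bit, the maximum for 2 outcomes

# 30-team case: a heavy favorite vs. the 1/30 ignorance prior.
favorite = [0.5] + [0.5 / 29] * 29
print(entropy_bits(favorite))         # ~3.43 bits
print(entropy_bits([1 / 30] * 30))    # log2(30) ~ 4.91 bits, the maximum

# "Overprecise" on this reading: the distribution's entropy is too low,
# i.e. it should move closer to the relevant ignorance prior.
```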
You can go ahead and post.
I did a check and am now more confident in my answer, and I’m not going to try to come up with an entry that uses fewer soldiers.
Just got to this today. I’ve come up with a candidate solution just to try to survive, but haven’t had a chance yet to check & confirm that it’ll work, or to try to get clever and reduce the number of soldiers I’m using.
10 Soldiers armed with: 3 AA, 3 GG, 1 LL, 2 MM, 1 RR
I will probably work on this some more tomorrow.
Building a paperclipper is low-value (from the point of view of total utilitarianism, or any other moral view that wants a big flourishing future) because paperclips are not sentient / are not conscious / are not moral patients / are not capable of flourishing. So filling the lightcone with paperclips is low-value. It maybe has some value for the sake of the paperclipper (if the paperclipper is a moral patient, or whatever the relevant category is) but way less than the future could have.
Your counter is that maybe building an aligned AI is also low-value (from the point of view of total utilitarianism, or any other moral view that wants a big flourishing future) because humans might not much care about having a big flourishing future, or might even actively prefer things like preserving nature.
If a total utilitarian (or someone who wants a big flourishing future in our lightcone) buys your counter, it seems like the appropriate response is: Oh no! It looks like we’re heading towards a future that is many orders of magnitude worse than I hoped, whether or not we solve the alignment problem. Is there some way to get a big flourishing future? Maybe there’s something else that we need to build into our AI designs, besides “alignment”. (Perhaps mixed with some amount of: Hmmm, maybe I’m confused about morality. If AI-assisted humanity won’t want to steer towards a big flourishing future then maybe I’ve been misguided in having that aim.)
Whereas this post seems to suggest the response of: Oh well, I guess it’s a dice roll regardless of what sort of AI we build. Which is giving up awfully quickly, as if we had exhausted the design space for possible AIs and seen that there was no way to move forward with a large chance at a big flourishing future. This response also doesn’t seem very quantitative—it goes very quickly from the idea that an aligned AI might not get a big flourishing future, to the view that alignment is “neutral” as if the chances of getting a big flourishing future were identically small under both options. But the obvious question for a total utilitarian who does wind up with just 2 options, each of which is a dice roll, is Which set of dice has better odds?
Is this calculation showing that, with a big causal graph, you’ll get lots of very weak causal relationships between distant nodes that should have tiny but nonzero correlations? And realistic sample sizes won’t be able to distinguish those relationships from zero.
Andrew Gelman often talks about how the null hypothesis (of a relationship of precisely zero) is usually false (for, e.g., most questions considered in social science research).
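Here’s a toy version of the kind of thing I have in mind; the chain structure, the 0.5 coefficient, and the sample size are all made up for illustration. The endpoint correlation is genuinely nonzero, but it shrinks geometrically with distance and quickly falls below what a realistic sample can distinguish from zero.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_chain(n_samples, chain_length, coef=0.5):
    """Linear causal chain X1 -> X2 -> ... -> Xk, every node kept at unit variance."""
    x = rng.normal(size=n_samples)
    first = x
    for _ in range(chain_length - 1):
        # Each node inherits `coef` of its parent plus independent noise.
        x = coef * x + np.sqrt(1 - coef**2) * rng.normal(size=n_samples)
    return first, x

n = 500
for k in (2, 4, 8):
    a, b = simulate_chain(n, k)
    true_r = 0.5 ** (k - 1)              # implied correlation between the endpoints
    sample_r = np.corrcoef(a, b)[0, 1]
    # Rough standard error of a sample correlation near zero: 1/sqrt(n).
    print(f"chain length {k}: true r = {true_r:.3f}, "
          f"sample r = {sample_r:.3f}, SE ~ {1/np.sqrt(n):.3f}")
```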
A lot of people have this sci-fi image, like something out of Deep Impact, Armageddon, Don’t Look Up, or Minus, of a single large asteroid hurtling towards Earth to wreak massive destruction. Or even massive vengeance, as if it was a punishment for our sins.
But realistically, as the field of asteroid collection gradually advances, we’re going to be facing many incoming asteroids which will interact with each other in complicated ways, and whose forces will to a large extent balance each other out.
Yet doomers are somehow supremely confident in how the future will go, foretelling catastrophe. And if you poke at their justifications, they won’t offer precise physical models of these many-body interactions, just these mythic stories of Earth vs. a monolithic celestial body.
They’re critical questions, but one of the secret-lore-of-rationality things is that a lot of people think criticism is bad, because if someone criticizes you, it hurts your reputation. But I think criticism is good, because if I write a bad blog post, and someone tells me it was bad, I can learn from that, and do better next time.
I read this as saying ‘a common view is that being criticized is bad because it hurts your reputation, but as a person with some knowledge of the secret lore of rationality I believe that being criticized is good because you can learn from it.’
And he isn’t making a claim about to what extent the existing LW/rationality community shares his view.
Seems like the main difference is that you’re “counting up” with status and “counting down” with genetic fitness.
There’s partial overlap between people’s reproductive interests and their motivations, and you and others have emphasized places where there’s a mismatch, but there are also (for example) plenty of people who plan their lives around having & raising kids.
There’s partial overlap between status and people’s motivations, and this post emphasizes places where they match up, but there are also (for example) plenty of people who put tons of effort into leveling up their videogame characters, or affiliating-at-a-distance with Taylor Swift or LeBron James, with minimal real-world benefit to themselves.
And it’s easier to count up lots of things as status-related if you’re using a vague concept of status which can encompass all sorts of status-related behaviors, including (e.g.) both status-seeking and status-affiliation. “Inclusive genetic fitness” is a nice precise concept so it can be clear when individuals fail to aim for it even when acting on adaptations that are directly involved in reproduction & raising offspring.
This post reads like it’s trying to express an attitude or put forward a narrative frame, rather than trying to describe the world.
Many of these claims seem obviously false, if I take them at face value at take a moment to consider what they’re claiming and whether it’s true.
e.g., On the first two bullet points it’s easy to come up with counterexamples. Some successful attempts to steer the future, by stopping people from doing locally self-interested & non-violent things, include: patent law (“To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries”) and banning lead in gasoline. As well as some others that I now see that other commenters have mentioned.