It’s happened again: I’ve realized that one of my old beliefs (pre-LW) is just plain dumb.
I used to look around at all the various diets (Paleo, Keto, low carb, low fat, etc.) and feel angry at people for having such low epistemic standards. Like, there’s a new theory of nutrition every two years, and people still put faith in them every time? Everybody swears by a different diet and this is common knowledge, but people still swear by diets? And the reasoning is that “fat” (the nutrient) has the same name as “fat” (the body part people are trying to get rid of)?
Then I encountered the “calories in = calories out” theory, which says that the only thing you need to do to lose weight is to make sure that you burn more calories than you eat.
And I thought to myself, “yeah, obviously.”
Because, you see, if the orthodox asserts X and the heterodox asserts Y, and the orthodox is dumb, then Y must be true!
Anyway, I hadn’t thought about this belief in a while, but I randomly remembered it a few minutes ago, and as soon as I remembered its origins, I chucked it out the window.
Oops!
(PS: I wouldn’t be flabbergasted if the belief turned out true anyway. But I’ve reverted my map from the “I know how the world is” state to the “I’m awaiting additional evidence” state.)
Then I encountered the “calories in = calories out” theory, which says that the only thing you need to do to lose weight is to make sure that you burn more calories than you eat.
And I thought to myself, “yeah, obviously.”
On one hand, yeah, obviously.
On the other hand, “burning calories” is not an elementary action. Suppose I tell you to burn 500 calories right now; how exactly would you achieve that? If your answer is that you would exercise or do something physically demanding, note that such actions spend ATP in your cells, so what if you don’t have enough ATP in your muscle cells? What is your plan B for burning calories? From the opposite side, you can limit the intake of calories. But what if your metabolism is such that, if you don’t provide enough calories, you will gradually fall into a coma and die?
Your metabolism can make it impossible for you to reduce the “calories in” or increase the “calories out”, if it is somehow set up in a way that does not convert the calories into useful energy in your muscle cells, and it starts damaging your organs if the calories are missing in general. So while the theory is almost tautologically true, it may still be impossible to use it to lose weight. And the problem is that the proponents of “calories in = calories out” usually smugly pretend that it is actionable advice, instead of a mere description.
Actionable advice needs to be about how the metabolism works, and about the things that impact it, such as what you eat, and who knows what else. Also, if you have some hormonal imbalance, or allergy, or whatever, your metabolism may differ from other people’s even if you eat the same things and try to live the same lifestyle. So, while e.g. eating less refined sugar would probably help everyone, no advice would guarantee a perfect outcome for everyone.
You make a good point—even if my belief was technically true, it could still have been poorly framed and inactionable (is there a name for this failure mode?).
But in fact, I think it’s not even obvious that it was technically true. If we say “calories in” is the sum of the calorie counts on the labels of each food item you eat (let’s assume the labels are accurate) then could there not still be some nutrient X that needs to be present for your body to extract the calories? Say, you need at least an ounce of X to process 100 calories? If so, then one could eat the same amount of food, but less X, and potentially lose weight.
Or perhaps the human body can only process food between four and eight hours after eating it, and it doesn’t try as hard to extract calories if you aren’t being active, so scheduling your meals to take place four hours before you sit around doing nothing would make them “count less”.
Calories are (presumably?) a measure of chemical potential energy, but remember that matter itself can also be converted into energy. There’s no antimatter engine inside my gut, so my body fails to extract all of the energy present in each piece of food. Couldn’t the mechanism of digestion also fail to extract all the chemical potential energy of species “calorie”?
Yes, there are many steps in the metabolism that are not under your conscious control. I am not an expert, so I don’t want to speculate too much about the technical details, but I think that gut bacteria probably also play a role. Simply, not everything you put in your mouth necessarily ends up in your bloodstream, and not everything that you absorbed is necessarily available in the form of muscle energy.
is there a name for this failure mode?
I don’t know any standard name. Seems to me the problem is confusing “rephrasing of the desired outcome” with “an algorithm that actually gets you there”. Something like:
Q: How can I get lot of money?
A: Become a millionaire!
Like, yeah, technically, everyone who successfully followed this advice ended up with lots of money, and everyone who didn’t can be accused of not following the advice properly, but that’s simply because those are two ways to describe the same thing.
Q: How can I lose weight?
A: Get rid of the extra atoms! I mean, extra calories!
Charitably, the advice is not absolutely bad, because for a hypothetical completely clueless listener it would provide a little information. But then, using this advice in practice means insinuating that your target is completely clueless, which is probably not the case.
I want to give a big thumbs up of positive reinforcement. I think it’s great that I got to read an “oops! That was dumb, but now I’ve changed my mind.”
Thanks for the feedback! Here’s another one for ya. A relatively long time ago I used to be pretty concerned about Pascal’s wager, but then I devised some clever reasoning why it all cancels out and I don’t need to think about it. I reasoned that one of three things must be true:
I don’t have an immortal soul. In this case, I might as well be a good person.
I have an immortal soul, and after my bodily death I will be assigned to one of a handful of infinite fates, depending on how good of a person I was. In this case it’s very important that I be a good person.
Same as above, but the decision process is something else. In this case I have no way of knowing how my infinite fate will be decided, so I might as well be a good person during my mortal life and hope for the best.
But then, post-LW, I realized that there are two issues with this:
It doesn’t make any sense to separate out case 2 from the enormous ocean of possibilities allowed for by case 3. Or rather, I can separate it, but then I need to probabilistically penalize it relative to case 3, and I also need to slightly shift the “expected judgment criterion” found in case 3 away from “being a good person is the way to get a good infinite fate”, and it all balances out.
More importantly, this argument flippantly supposes that I have no way of discerning what process, if any, will be used to assign me an infinite fate. An infinite fate, mind you. I ought to be putting in more thought than this even if I thought the afterlife only lasted an hour, let alone eternity.
So now I am back to being rather concerned about Pascal’s wager, or more generally, the possibility that I have an immortal soul and need to worry about where it eventually ends up.
From my first read-through of the sequences I remember that they claim to show that the idea of there being a god is somewhat nonsensical, but I didn’t quite catch it the first time around. So my first line of attack is to read through the sequences again, more carefully this time, and see if they really do give a valid reason to believe that.
From my first read-through of the sequences I remember that they claim to show that the idea of there being a god is somewhat nonsensical, but I didn’t quite catch it the first time around.
I think the first step is to ask yourself what you even mean by “god”.
Because if you have a definition like “the spirit of our dead chieftain, who sometimes visits me in my dreams”, I have no problem with that. Like, I find it completely plausible that you see the image of the chieftain in your dreams; nothing unscientific about that. It’s just that the image is generated by your brain, so if you try to communicate with it, it can only give you advice that your brain generated, and it can only grant your wishes in the sense that you accomplish them yourself. But if you agree that this is exactly what you meant, then such a god is perfectly okay with me.
But the modern definition is more like: an intelligent being that exists outside of our universe, but can observe it and change it. Suspending disbelief for a moment: how exactly can a being be “intelligent”, whether inside our universe or not? Intelligence implies processing data. That would imply that the god has some… supernatural neurons, or some other architecture capable of processing data. So the god is a mechanism (in the sense that a human is a biological mechanism) composed of parts, although those parts do not necessarily have to be found in our physics. Still kinda plausible, maybe gods are made out of dark matter, who knows. But then, how did this improbably complicated mechanism come into existence? Humans were made by evolution, were gods too? But then again those gods are not the gods of religion; they are merely powerful aliens. But powerful aliens are neither creators of the universe, nor are they omniscient.
Then the theologians try to come up with some smart-sounding arguments, like “god is actually supremely simple”—therefore Occam’s razor does not disprove him, and no evolution was needed, because simple things are more likely a priori, so a supreme being is supremely likely. Or something like that. But this is nonsense, because “supremely simple” and “capable of processing information” are incompatible.
The last argument would be: okay, we do not have a coherent explanation of god, but you don’t have a coherent explanation of how the universe can exist without one. And a potential answer is that maybe existence is relative. Like, if you run a simulation of Conway’s Game of Life, it could possibly contain evolved intelligent life, and that life would ask “how was our universe created?” And it’s not like you created the universe by simulating it, because you are merely following the mathematical rules; so it’s more like the math created that universe and you are only observing it. If the beings in that mathematical universe pray to gods, there is no way for anyone outside to intervene (while simultaneously following the mathematical rules). So the universe inside the Game of Life is a perfectly godless universe, based on math. But does it exist? Well, the rules of math describe the intelligent beings inside that universe, who live, think, and interact with their world. Maybe that’s all that is needed for existence; or rather, the question is what else could be needed? And then, maybe our world is also just an implication of some mathematical equations.
tl;dr—maybe math itself implies existence, and if you try to turn the concept of an (intelligent, perceiving, acting) god into something coherent, it is no longer the god of religion but merely a powerful alien
That would imply that the god has some… supernatural neurons, or some other architecture capable of processing data.
The supernatural isn’t supposed to be the natural done all over again. The typical theological claim is that God’s wisdom or whatever is an intrinsic quality, not something with moving parts.
Well, “wisdom as an intrinsic quality” is a mysterious answer. And what is “wisdom without moving parts”? A precomputed table of wise answers to all possible questions? Who precomputed it and how?
I agree that this is how theology usually answers it, but it is an answer that doesn’t make any sense when you look at it closer; it’s just some good-sounding words randomly glued together. And if you try to make it refer to something, even a hypothetical something, the whole explanation falls apart.
That those particular things can be understood in terms of the operations of their components.
There’s an irreducibly basic level. It’s not turtles all the way down.
If it’s always the case that something that isn’t explicable in terms of its parts is mysterious, then the lowest level is mysterious. If nothing is allowed to be mysterious, i.e. if you apply the argument against mysterious answers without prejudice, then reductionism is false. There isn’t a consistent set of principles here.
Continued ..
Naturalism is the claim that there is a bunch of fundamental properties that just are, at the bottom of the stack, and everything is built up from that. Supernaturalism is the claim that the intrinsic stuff is at the top of the stack, and everything else is derived from it top-down. That may be 100% false, but it is the actual claim.
There’s a thing called the principle of charity, where one party interprets the other’s statements so as to maximise their truth value. This only enhances communication if the truth is not basically in dispute...that’s the easy case. The hard case is when there is a basic dispute about what’s true. In that case, it’s not helpful to fix the other person’s claims by making them more reasonable from your point of view.
Anyway, that’s how we ended up with “God must have superneurons in his superbrain”.
Feels like in the top-down universe, science shouldn’t work at all. I mean, when you take a magnifying glass and look at the details, they are supposedly generated on the fly to fit the larger picture. Then you apply munchkinry to the details and invent an atomic bomb or quantum computer… which means… what exactly, from the top-down perspective?
Yeah, you can find an excuse, e.g. that some of those top-down principles are hidden like Easter eggs, waiting to be discovered later. That the Platonic idea of smartphones has been waiting for us since the creation of the universe, but was only revealed to the recent generation. Which would mean that the top-down universe has some reason to pretend to be bottom-up, at least in some aspects...
Okay, the same argument could be made that quantum physics pretends to be classical physics at larger scale, or relativity pretends to be Newtonian mechanics at low speeds… as if the scientists are trying to make up silly excuses for why their latest magic works but totally “doesn’t contradict” what the previous generations of scientists were telling us...
Well, at least it seems like the bottom-up approach is fruitful, whether the true reason is that the universe is bottom-up, or that the universe is top-down in a way that tries really hard to pretend that it is actually bottom-up (either in the sense that when it generates the—inherently meaningless—details for us, it makes sure that all consequences of those details are compatible with the preexisting Platonic ideas that govern the universe… or like a Dungeon Master who allows the players to invent all kinds of crazy stuff and throw the entire game off balance, because he values consistency above everything).
More importantly, in a universe where there is magic all the way up, what sense does it make to adopt the essentially half-assed approach, where you believe in the supernatural but also kinda use logic, except not too seriously… might as well throw the logic away completely, because in that kind of universe it is not going to do you much good anyway.
The basic claim of a top-down universe is a short string that doesn’t contain much information. About the same amount as a basic claim of reductionism.
The top-down claim doesn’t imply a universe of immutable physical law, but it doesn’t contradict it either.
The same goes for the bottom-up claim. A universe of randomly moving high entropy gas is useless for science and technology, but compatible with reductionism.
But all this is rather beside the point. Even if supernaturalism is indefensible, you can’t refute it by changing it into something else.
I wouldn’t call the dead chieftain a god—that would just be a word game.
But then, how did this improbably complicated mechanism come into existence? Humans were made by evolution, were gods too? But then again those gods are not the gods of religion; they are merely powerful aliens. But powerful aliens are neither creators of the universe, nor are they omniscient.
Wait wait! You say a god-like being created by evolution cannot be a creator of the universe. But that’s only true if you constrain that particular instance of evolution to have occurred in *this* universe. Maybe this universe is a simulation designed by a powerful “alien” in another universe, who itself came about from an evolutionary process in its own universe.
It might be “omniscient” in the sense that it can think 1000x as fast as us and has 1000x as much working memory and is familiar with thinking habits that are 1000x as good as ours, but that’s a moot point. The real thing I’m worried about isn’t whether there exists an omniscient-omnipotent-benevolent creature, but rather whether there exists *some* very powerful creature who I might need to understand to avoid getting horrible outcomes.
I haven’t yet put much thought into this, since I only recently came to believe that this topic merits serious thought, but the existence of such a powerful creature seems like a plausible avenue to the conclusion “I have an infinite fate and it depends on me doing/avoiding X”.
[...] Occam’s razor [...]
This is another area where my understanding could stand to be improved (and where I expect it will be during my next read-through of the sequences). I’m not sure exactly what kind of simplicity Occam’s razor uses. Apparently it can be formalized as Kolmogorov complexity, but the only definition I’ve ever found for that term is “the Kolmogorov Complexity of X is the length of the shortest computer program that would output X”. But this definition is itself in need of formalization. Which programming language? And what if X is something other than a stream of bits, such as a dandelion? And even once that’s answered, I’m not quite sure how to arrive at the conclusion that Kolmogorov-ly simpler things are more likely to be encountered.
(All that being said, I’d like to note that I’m keeping in mind that just because I don’t understand these things doesn’t mean there’s nothing to them. Do you know of any good learning resources for someone who has my confusions about these topics?)
And it’s not like you created the universe by simulating it, because you are merely following the mathematical rules; so it’s more like the math created that universe and you are only observing it.
If the beings in that mathematical universe pray to gods, there is no way for anyone outside to intervene (while simultaneously following the mathematical rules). So the universe inside the Game of Life is a perfectly godless universe, based on math.
That much makes sense, but I think it excludes a possibly important class of universe that is based on math but also depends on a constant stream of data from an outside source. Imagine a Life-like simulation ruleset where the state of the array of cells at time T+1 depended on (1) the state of the array at time T and (2) the on/off state of a light switch in my attic at time T. I could listen to the prayers of the simulated creatures and use the light switch to influence their universe such that they are answered.
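To make that concrete, here is a minimal sketch of the kind of rule I mean (a toy one-dimensional rule rather than actual Life; the update rule, the grid, and the switch_history list are all invented for illustration). The point is only that the state at time T+1 is a function of the state at time T plus one bit supplied from outside the simulated universe:

```python
# Toy sketch (not real Life): a 1D cellular automaton whose update rule
# takes one extra bit of outside input at every step. The "light switch"
# is just a hard-coded list standing in for whatever the outside being
# chooses to do.

def step(cells, switch_on):
    # A cell turns on iff exactly one of its two neighbors is on,
    # XOR-ed with the external switch bit. The specific rule doesn't
    # matter; what matters is that the trajectory is no longer a
    # function of the initial state alone.
    n = len(cells)
    return [
        int(((cells[(i - 1) % n] + cells[(i + 1) % n]) == 1) ^ switch_on)
        for i in range(n)
    ]

cells = [0, 0, 0, 1, 0, 0, 0, 0]
switch_history = [False, False, True, False, True]  # the outside "interventions"

for switch_on in switch_history:
    cells = step(cells, switch_on)
    print(cells)
```

Two runs that start from the same initial state but use different switch_history lists diverge, which is the sense in which someone outside could answer prayers without ever breaking the rules.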
I wouldn’t call the dead chieftain a god—that would just be a word game.
Some people in history did, specifically, ancient Romans. But now we don’t. Just making it obvious.
You say a god-like being created by evolution cannot be a creator of the universe. But that’s only true if you constrain that particular instance of evolution to have occurred in *this* universe. Maybe this universe is a simulation designed by a powerful “alien” in another universe, who itself came about from an evolutionary process in its own universe.
And this is similar to the chieftain thing. You can have a religion that defines “god” as “an alien from another universe”, but many religions insist that god is not created but eternal.
The real thing I’m worried about isn’t whether there exists an omniscient-omnipotent-benevolent creature, but rather whether there exists *some* very powerful creature who I might need to understand to avoid getting horrible outcomes.
Yes, this is a practical approach. Well, you cannot disprove such a thing, because it is logically possible. (Obviously, “possible” does not automatically imply “it happened”.) But unless you assume it is “simulations all the way up”, there must be a universe that is not created by an external alien lifeform. Therefore, it is also logically possible that our universe is like that.
There is no reason to assume that the existing religions have something in common with the simulating alien. When I play Civilization, the “people” in my simulation have a religion, but it doesn’t mean that I believe in it, or even approve of it, or that I am somehow going to reward them for it.
It’s just a cosmic horror that you need to learn to live with. There are more.
the only definition I’ve ever found for that term is “the Kolmogorov Complexity of X is the length of the shortest computer program that would output X”. But this definition is itself in need of formalization. Which programming language?
Any programming language; for large enough values it doesn’t matter. If you believe that e.g. Python is much better in this regard than Java, then for sufficiently complicated things the most efficient way to implement them in Java is to implement a Python emulator (a constant-sized piece of code) and then implement the algorithm in Python. So if you choose the wrong language, you pay at most a constant-sized penalty. Which is usually irrelevant, because these things are usually applied when debating what happens in general as the data grow.
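Stated a bit more formally (this is just the standard invariance theorem; the symbols here are mine, not anything from the comment above): if $K_L(x)$ denotes the length of the shortest program in language $L$ that outputs $x$, then

$$K_{L_1}(x) \;\le\; K_{L_2}(x) + c_{L_1, L_2},$$

where the constant $c_{L_1, L_2}$ is roughly the length of an $L_2$ interpreter written in $L_1$; it depends on the pair of languages but not on $x$.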
But I agree that if you talk about small data, it is underspecified. I don’t really know what it could mean to “have a universe defined by the following three bits: 0, 0, 1”, and maybe no one has a meaningful answer to this. But there are cases where you can have an intuition that for any reasonable definition of a programming language, X should be simpler than Y.
I’m not quite sure how to arrive at the conclusion that Kolmogorov-ly simpler things are more likely to be encountered.
Just an intuition pump: Imagine that there is a multiverse A containing all universes that can be run by programs having exactly 100 lines of code; and there is a multiverse B containing all universes that can be run by programs having exactly 105 lines of code. The universes from A are more likely, because they also appear in B.
For each program A1 describing a universe from A, you have a set of programs describing exactly the same universe in B, simply by adding an “exit” instruction on line 101, and arbitrary instructions on lines 102-105. If we take a program B1 describing a universe from B in such a way that each of the 105 lines is used meaningfully, then… in multiverse A we have one A1 vs zero B1, and in multiverse B we have many A1-equivalents vs one B1; in either case A1 wins.
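To put a rough number on the intuition pump (under an assumption the comment above does not specify, namely that each line can be any of $L$ possible instructions):

$$\#\{\text{105-line paddings of } A_1\} = L^{4}, \qquad \#\{\text{copies of } B_1\} = 1,$$

so under a uniform weight over 105-line programs, the universe described by $A_1$ carries about $L^{4}$ times the weight of the one described by $B_1$. Each extra line that is actually used costs roughly a factor of $L$, which has the same shape as the $2^{-\text{length}}$ prior mentioned later in this thread.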
Do you know of any good learning resources for someone who has my confusions about these topics?
A part of this is in standard computer science curriculum, and another part is a philosophical extrapolation. I do not have a recommendation about a specific textbook, I just vaguely remember things like this from university. (Actually, I probably never heard explicitly about Kolmogorov complexity at university, but I learned some related concepts that allowed me to recognize what it means and what it implies, when I found it on Less Wrong.)
Imagine a Life-like simulation ruleset where the state of the array of cells at time T+1 depended on (1) the state of the array at time T and (2) the on/off state of a light switch in my attic at time T. I could listen to the prayers of the simulated creatures and use the light switch to influence their universe such that they are answered.
Then I would say the Kolmogorov complexity of the “Life-only” universe is the complexity of the rules of Life; but the complexity of the “Life+inputs” universe is the complexity of the rules plus the complexity of whoever generates the inputs, and everything they depend on, so it is the complexity of the outside universe.
The Civ analogy makes sense, and I certainly wouldn’t stop at disproving all actually-practiced religions (though at the moment I don’t even feel equipped to do that).
Well, you cannot disprove such a thing, because it is logically possible. (Obviously, “possible” does not automatically imply “it happened”.) But unless you assume it is “simulations all the way up”, there must be a universe that is not created by an external alien lifeform. Therefore, it is also logically possible that our universe is like that.
Are you sure it’s logically possible in the strict sense? Maybe there’s some hidden line of reasoning we haven’t yet discovered that shows that this universe isn’t a simulation! (Of course, there’s a lot of question-untangling that has to happen first, like whether “is this a simulation?” is even an appropriate question to ask. See also: Greg Egan’s book Permutation City, a fascinating work of fiction that gives a unique take on what it means for a universe to be a simulation.)
It’s just a cosmic horror that you need to learn to live with. There are more.
This sounds like the kind of thing someone might say who is already relatively confident they won’t suffer eternal damnation. Imagine believing with probability at least 1/1000 that, if you act incorrectly during your life, then...
(WARNING: graphic imagery) …upon your bodily death, your consciousness will be embedded in an indestructible body and put in a 15K degree oven for 100 centuries. (END).
Would you still say it was just another cosmic horror you have to learn to live with? If you wouldn’t still say that, but you say it now because your probability estimate is less than 1/1000, how did you come to have that estimate?
Any programming language; for large enough values it doesn’t matter. If you believe that e.g. Python is much better in this regard than Java, then for sufficiently complicated things the most efficient way to implement them in Java is to implement a Python emulator (a constant-sized piece of code) and then implement the algorithm in Python. So if you choose the wrong language, you pay at most a constant-sized penalty. Which is usually irrelevant, because these things are usually applied when debating what happens in general as the data grow.
The constant-sized penalty makes sense. But I don’t understand the claim that this concept is usually applied in the context of looking at how things grow. Occam’s razor is (apparently) formulated in terms of raw Kolmogorov complexity—the appropriate prior for an event X is 2^(-B), where B is the Kolmogorov Complexity of X.
Let’s say general relativity is being compared against Theory T, and the programming language is Python. Doesn’t it make a huge difference whether you’re allowed to “pip install general-relativity” before you begin?
But there are cases where you can have an intuition that for any reasonable definition of a programming language, X should be simpler than Y.
I agree that these intuitions can exist, but if I’m going to use them, then I detest this process being called a formalization! If I’m allowed to invoke my sense of reasonableness to choose a good programming language to generate my priors, why don’t I instead just invoke my sense of reasonableness to choose good priors? Wisdom of the form “programming languages that generate priors that work tend to have characteristic X” can be transformed into wisdom of the form “priors that work tend to have characteristic X”.
Just an intuition pump: [...]
I have to admit that I kind of bounced off of this. The universe-counting argument makes sense, but it doesn’t seem especially intuitive to me that the whole of reality should consist of one universe for each computer program of a set length written in a set language.
(Actually, I probably never heard explicitly about Kolmogorov complexity at university, but I learned some related concepts that allowed me to recognize what it means and what it implies, when I found it on Less Wrong.)
Can I ask which related concepts you mean?
[...] so it is the complexity of the outside universe.
Oh, that makes sense. In that case, the argument would be that nothing outside MY universe could intervene in the lives of the simulated Life-creatures, since they really just live in the same universe as me. But then my concern just transforms into “what if there’s a powerful entity living in this universe (rather than outside of it) who will punish me if I do X, etc”.
Maybe there’s some hidden line of reasoning we haven’t yet discovered that shows that this universe isn’t a simulation!
I find it difficult to imagine how such an argument could even be constructed. “Our universe isn’t a simulation because it has property X” doesn’t explain why the simulator could not simulate X. The usual argument is “because quantum stuff, the simulation would require insane amounts of computing power”, which is true, but we have no idea what the simulating universe looks like, and what kind of physics it has… maybe what’s an insane amount for us is peanuts for them.
But maybe there is some argument for why computing power in principle (like, for some mathematical reason) cannot exceed a certain value, ever. And the value may turn out to be insufficient to simulate our universe. And we can somehow make sure that our entire universe is simulated at sufficient resolution (not something like: the Earth or perhaps the entire Solar system is simulated in full quantum physics, but everything else is just a credible approximation). Etc. Well, if such a thing happens, then I would accept the argument.
This sounds like the kind of thing someone might say who is already relatively confident they won’t suffer eternal damnation. Imagine believing with probability at least 1/1000 that, if you act incorrectly during your life, then...
Yeah, I just… stopped worrying about these kinds of things. (In my case, “these kinds of things” refer e.g. to very unlikely Everett branches, which I still consider more likely than gods.) You just can’t win this game. There are a million possible horror scenarios, each of them extremely unlikely, but each of them extremely horrifying, so you would just spend all your life thinking about them; and there is often nothing you can do about them. In some cases you would be required to do contradictory things (you spend your entire life trying to appease the bloodthirsty Jehovah, but it turns out the true master of the universe is the goddess Kali and she is very displeased with your Christianity...), or it could be some god you don’t even know because it is a god of the Aztecs, or some future god that will only be revealed to humanity in the year 3000. Maybe humans are just a precursor to an intelligent species that will exist a million years in the future, and from their god’s perspective humans are even less relevant than monkeys are to Christianity. Maybe we are just meant to be food for the space locusts from the Andromeda galaxy. Maybe our entire universe is a simulation on a computer in some alien universe with insane computing power, but they don’t care about the intelligent beings or life in general; they just use the flashing galaxies as a screen saver when they are bored. If you put things into perspective, assigning probability 1/1000 to any specific religion is way too much; all kinds of religions, existing and fictional, put together don’t deserve that much.
And by the way, torturing people forever, because they did not believe in your illogical incoherent statements unsupported by evidence, that is 100% compatible with being an omnipotent, omniscient, and omnibenevolent god, right? Yet another theological mystery...
On the other hand, if you assume an evil god, then… maybe the holy texts and promises of heaven are just a sadistic way he is toying with us, and then he will torture all of us forever regardless.
So… you can’t really win this game. Better to focus on things where you actually can gather evidence, and improve your actual outcomes in life.
Psychologically, if you can’t get rid of the idea of the supernatural, maybe it would be better to believe in an actually good god. Someone who is at least as reasonable and good as any of us, which should not be an impossibly high standard. Such a god certainly wouldn’t spend an eternity torturing random people for “crimes” such as being generally decent people but believing in a wrong religion or no religion at all, or having extramarital sex, etc. (Some theologians would say that this is actually their god. I don’t think so, but whatever.) I don’t really believe in such a god either, honestly, but it is a good fallback plan when you can’t get rid of the idea of gods completely.
Let’s say general relativity is being compared against Theory T, and the programming language is Python. Doesn’t it make a huge difference whether you’re allowed to “pip install general-relativity” before you begin?
That would be cheating, obviously. Unless by the length of the code you also mean the length of all the used libraries, in which case it is okay. It is assumed that the original programming language does not favor any specific theory, but just provides very basic capabilities for expressing things like “2+2” or “if A then B else B”. (Abstracting away from details such as how many pluses are in one “if”.)
If I’m allowed to invoke my sense of reasonableness to choose a good programming language to generate my priors, why don’t I instead just invoke my sense of reasonableness to choose good priors?
Yeah, ok. The meaning of all this is: how can we compare the “complexity” of two things where neither is a strict subset of the other? The important part is that “fewer words” does not necessarily mean “smaller complexity”, because that allows obvious cheating (invent a new word that means exactly what you are arguing for, and then insist that your theory is really simple because it could be described by one word—the same trick as with importing a library), but even if it is not your intention to cheat, your theory can still benefit from some concepts having shorter words for historical reasons, or even because words related to humans (or life on Earth in general) are already shorter, so you should account for the complexity that is already included in them. Furthermore, it should be obvious to you that “1000 different laws of physics” is more complex than “1 law of physics, applied in the same way to 1000 particles”. If all this is obvious to you, then yes, making the analogy to a programming language does not bring any extra value.
But historical evidence shows that humans are quite bad at this. They will insist that stars are just shining dots in the sky instead of distant solar systems, because “one solar system + thousands of shining dots” seems to them less complex than “thousands of solar systems”. They will insist that the Milky Way or Andromeda cannot be composed of stars, because “thousands of stars + one Milky Way + one Andromeda” seems to them less complex than “millions of stars”. More recently (and more controversially), they will insist that “quantum physics + collapse + classical physics” is less complex than “quantum physics all the way up”. The programming analogy helps to express this in a simple way: “Complexity is about the size of the code, not about how large the values stored in the variables are.”
Can I ask which related [to Kolmogorov complexity] concepts you mean?
Compression (lossless), like in “zip” files. Specifically, the fact that an average random file cannot be compressed, no matter how smart the method used is. Like, for any given value N, the number of all files with size N is larger than the number of all files with size smaller than N. So, whatever your compression method is, if it is lossless, it needs to be a bijection, so there must be at least one file of size N that it will not compress into a file of smaller size. Even worse, for each file of size N compressed to a smaller size, there must be a file of size smaller than N that gets “compressed” to a file of size N or more.
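The counting step, spelled out for files measured in bits:

$$\#\{\text{files shorter than } N \text{ bits}\} = \sum_{i=0}^{N-1} 2^{i} = 2^{N} - 1 \;<\; 2^{N} = \#\{\text{files of exactly } N \text{ bits}\},$$

so no lossless (i.e. invertible) scheme can map every N-bit file to a strictly shorter one; at least one must map to something of length N or more.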
So how does compression work in the real world? (Because experience shows it does.) It ultimately exploits the fact that most of the files we want to compress are not random, so it is designed to compress non-random files into smaller ones, and random files (containing only noise) into slightly larger ones. Like, you can try it at home; generate a one-megabyte file full of randomly generated bits, then try all the compression programs you have installed, and see what happens.
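If you want to skip installing anything, here is a minimal sketch of that experiment using a few compressors from the Python standard library (the exact byte counts will vary from run to run, but the random buffer should come out essentially incompressible while the repetitive one shrinks drastically):

```python
import bz2
import lzma
import os
import zlib

random_data = os.urandom(1_000_000)     # ~1 MB of pure noise
regular_data = b"abcdefgh" * 125_000    # ~1 MB of an obvious pattern

for name, compress in [("zlib", zlib.compress),
                       ("bz2", bz2.compress),
                       ("lzma", lzma.compress)]:
    print(name,
          "random:", len(compress(random_data)),
          "regular:", len(compress(regular_data)))
```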
Now, each specific compression algorithm recognizes and exploits some kinds of regularity, and is blind towards others. This is why people sometimes invent better compression algorithms that exploit more regularities. The question is, what is the asymptote of this progress? If you tried to invent the best compression algorithm ever, one that could exploit literally all kinds of non-randomness, what would it look like?
(You may want to think about this for a moment, before reading more.)
The answer is that the hypothetical best compression algorithm ever would transform each file into the shortest possible program that generates this file. There is only one problem with that: finding the shortest possible program for every input is algorithmically impossible—this is related to the halting problem. Regardless, if the file is compressed by this hypothetical ultimate compression algorithm, the size of the compressed file would be the Kolmogorov complexity of the original file.
But then my concern just transforms into “what if there’s a powerful entity living in this universe (rather than outside of it) who will punish me if I do X, etc”.
Then we are no longer talking about gods in the modern sense, but about powerful aliens.
Yeah, I just… stopped worrying about these kinds of things. (In my case, “these kinds of things” refer e.g. to very unlikely Everett branches, which I still consider more likely than gods.) You just can’t win this game. There are a million possible horror scenarios, each of them extremely unlikely, but each of them extremely horrifying, so you would just spend all your life thinking about them; [...]
I see. In that case, I think we’re reacting differently to our situations due to being in different epistemic states. The uncertainty involved in Everett branches is much less Knightian—you can often say things like “if I drive to the supermarket today, then approximately 0.001% of my future Everett branches will die in a car crash, and I’ll just eat that cost; I need groceries!”. My state of uncertainty is that I’ve barely put five minutes of thought into the question “I wonder if there are any tremendously important things I should be doing right now, and particularly if any of the things might have infinite importance due to my future being infinitely long.”
And by the way, torturing people forever, because they did not believe in your illogical incoherent statements unsupported by evidence, that is 100% compatible with being an omnipotent, omniscient, and omnibenevolent god, right? Yet another theological mystery...
Well, that’s another reference to “popular” theism. Popular theism is a subset of theism in general, which itself is a subset of “worlds in which there’s something I should be doing that has infinite importance”.
On the other hand, if you assume an evil god, then… maybe the holy texts and promises of heaven are just a sadistic way he is toying with us, and then he will torture all of us forever regardless.
Yikes!! I wish LessWrong had emojis so I could react to this possibility properly :O
So… you can’t really win this game. Better to focus on things where you actually can gather evidence, and improve your actual outcomes in life.
This advice makes sense, though given the state of uncertainty described above, I would say I’m already on it.
Psychologically, if you can’t get rid of the idea of the supernatural, maybe it would be better to believe in an actually good god. [...]
This is a good fallback plan for the contingency in which I can’t figure out the truth and then subsequently fail to acknowledge my ignorance. Fingers crossed that I can at least prevent the latter!
[...] your theory can still benefit from some concepts having shorter words for historical reasons [...]
Well, I would have said that an exactly analogous problem is present in normal Kolmogorov Complexity, but...
But historical evidence shows that humans are quite bad at this.
...but this, to me, explains the mystery. Being told to think in terms of computer programs generating different priors (or more accurately, computer programs generating different universes that entail different sets of perfect priors) really does influence my sense of what constitutes a “reasonable” set of priors.
I would still hesitate to call it a “formalism”, though IIRC I don’t think you’ve used that word. In my re-listen of the sequences, I’ve just gotten to the part where Eliezer uses that word. Well, I guess I’ll take it up with somebody who calls it that.
By the way, it’s just popped into my head that I might benefit from doing an adversarial collaboration with somebody about Occam’s razor. I’m nowhere near ready to commit to anything, but just as an offhand question, does that sound like the sort of thing you might be interested in?
[...] The answer is that the hypothetical best compression algorithm ever would transform each file into the shortest possible program that generates this file.
Insightful comments! I see the connection: really, every compression of a file is a compression into the shortest program that will output that file, where the programming language is the decompression algorithm and the search algorithm that finds the shortest program isn’t guaranteed to be perfect. So the best compression algorithm ever would simply be one with a really really apt decompression routine (one that captures the nuanced nonrandomness found in files humans care about very well) and an oracle for computing shortest programs (rather than a decent but imperfect search algorithm).
But then my concern just transforms into “what if there’s a powerful entity living in this universe (rather than outside of it) who will punish me if I do X, etc”.
Then we are no longer talking about gods in the modern sense, but about powerful aliens.
Well, if the “inside/outside the universe” distinction is going to mean “is/isn’t causally connected to the universe at all” and a god is required to be outside the universe, then sure. But I think if I discovered that the universe was a simulation and there was a being constantly watching it and supplying a fresh bit of input every hundred Planck intervals in such a way that prayers were occasionally answered, I would say that being is closer to a god than an alien.
But in any case, the distinction isn’t too relevant. If I found out that there was a vessel with intelligent life headed for Earth right now, I’d be just as concerned about that life (actual aliens) as I would be about god-like creatures that should debatably also be called aliens.
By the way, it’s just popped into my head that I might benefit from doing an adversarial collaboration with somebody about Occam’s razor. I’m nowhere near ready to commit to anything, but just as an offhand question, does that sound like the sort of thing you might be interested in?
Definitely not interested. My understanding of these things is kinda intuitive (with intuition based on decent knowledge of math and computer science, but still), so I believe that “I’ll know it when I see it” (give me two options, and I’ll probably tell you whether one of them seems “simpler” than the other), but I wouldn’t try to put it into exact words.
Congratulations—noticing that you are confused is an important step!
What are you doing while “awaiting additional evidence”? This is a topic that doesn’t have a neutral/agnostic position—biology forces you to eat, and you have some influence (depending on your willpower model) over what and how much.
This belief wasn’t really affecting my eating habits, so I don’t think I’ll be changing much. My rules are basically:
No meat (I’m a vegetarian for moral reasons).
If I feel hungry but I can see/feel my stomach being full by looking at / touching my belly, I’m probably just bored or thirsty and I should consider not eating anything.
Try to eat at least a meal’s worth of “light” food (like toast or cereal as opposed to pizza or nachos) per day. This last rule is just to keep me from getting stomach aches, which happens if I eat too much “heavy” food in too short a time span.
I think I might contend that this kind of reflects an agnostic position. But I’m glad you asked, because I hadn’t noticed before that rule 2 actually does implicitly assume some relationship between “amount of food” and “weight change”, and is put in place so I don’t gain weight. So I guess I should really have said that what I tossed out the window was the extra detail that calories alone determine the effect food will have on one’s weight. I still believe, for normal cases, that taking the same eating pattern but scaling it up (eating more of everything but keeping the ratios the same) will result in weight gain.
This is a model I came up with in middle school to explain why it felt like I was treated differently from others even when I acted the same. I invented it long before I fully understood what models were (which only occurred sometime in the last year) and as such it’s something of a “baby’s first model” (ha ha) for me. As you’d expect for something authored by a middle schooler regarding their problems, it places minimal blame on myself. However, even nowadays I think there’s some truth to it.
Here’s the model. Your reputation is a ball on a hill. The valley on one side of the hill corresponds to being revered, and the valley on the other side corresponds to being despised. The ball begins on top of the hill. If you do something that others see as “good” then the ball gets nudged to the good side, and if you do something that others see as “bad” then it gets nudged to the other side.
Here’s where the hill comes in. Once your reputation has been nudged one way or the other, it begins to affect how others interpret your actions. If you apologize for something you did wrong and your reputation is positive, you’re “being the bigger person and owning up to your mistakes”; if you do the same when your reputation is negative, you’re “trying to cover your ass”. Once your action has been interpreted according to your current reputation, it is then fed back into the calculation as an update: the rep/+ person who apologized gets a boost, and the rep/- person who apologized gets shoved down even further.
Hence, “once the ball is sufficiently far down the hill, it begins to roll on its own”. You can take nothing but neutral actions and your reputation will become a more extreme version of what it already is (assuming it was far-from-center to begin with). This applies to positive reputation as well as negative! I have had the experience of my reputation rolling down the positive side of the hill—it was great.
There are also other factors that can affect the starting position of the ball, e.g. if you’re attractive or if somebody gives you a positively-phrased introduction then you start on the positive side, but if you’re ugly or if your current audience has heard bad rumors about you then you start on the negative side.
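For anyone who likes seeing a model move, here is a toy simulation of the feedback loop (the linear update rule and the 0.3 bias factor are just numbers I made up for illustration, not part of the model as stated):

```python
# Toy ball-on-a-hill sketch: every action is interpreted through the lens
# of current reputation, and the interpretation feeds back into the
# reputation. All numbers are made up for illustration.

def run(initial_reputation, steps=10, bias=0.3):
    reputation = initial_reputation
    history = [round(reputation, 3)]
    for _ in range(steps):
        action_value = 0.0                               # a perfectly neutral action
        interpreted = action_value + bias * reputation   # colored by current reputation
        reputation += interpreted                        # the interpretation becomes the update
        history.append(round(reputation, 3))
    return history

print(run(0.0))    # ball starts on top of the hill: it stays put
print(run(0.5))    # starts slightly positive: rolls down the good side
print(run(-0.5))   # starts slightly negative: rolls down the bad side
```

With nothing but neutral actions, any off-center starting reputation grows more extreme, which is the “rolls on its own” behavior described above.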
I’d be curious if anyone else has had this experience and feels this is an accurate model, and I’d be very curious if anyone thinks there is a significant hole in it.
This very much matches my own model. Once you are high or low status, it’s self-reinforcing and people will interpret the evidence to support the existing story, which is why when you are high you can play low and you won’t lose status (you’re just “slumming it” or something similar) and when you are low you can play high and will not gain any status (you’re “reaching above your station”).
We used to talk about a “halo effect” here (and sometimes, a “negative halo effect”). I like this way of describing it.
I think it might be more valuable to just prefer to use a general model of confirmation bias though. People find whatever they’re looking for. They only find the truth if they’re really really looking for the truth, whatever it’s going to be, and nothing else, and most people aren’t, and that captures most of what is happening.
as such it’s something of a “baby’s first model” (ha ha) for me. As you’d expect for something authored by a middle schooler regarding their problems, it places minimal blame on myself.
Heh, I like this sentence a lot (both for being funny, sort of adorable, and also just actually being a useful epistemic status)
This model certainly seems relevant, but should probably be properly seen as one particular lens, or a facet of a much more complicated equation. (In particular, people can have different kinds of reputation in different domains)
(In particular, people can have different kinds of reputation in different domains)
That’s true. I didn’t notice this as I was writing, but my entire post frames “reputation” as being representable as a number. I think this might have been more or less true for the situations I had in mind, all of which were non-work social groups with no particular aim.
Here’s another thought. For other types of reputations that can still be modeled as a ball on a hill, it might be useful to parameterize the slope on each side of the hill.
“Social reputation” (the vague stuff that I think I was perceiving in the situations that inspired this model) is one where the rep/+ side is pretty shallow, but the rep/- side is pretty steep. It’s not too hard to screw up and lose a good standing — in particular, if the social group gets it in their head they you were “faking it” and that you’re “not actually a good/kind/confident/funny person” — but once you’re down the well, it’s very hard to climb out.
“Academic reputation”, on the other hand, seems like it might be the reverse. I can imagine that if someone is considered a genius, and then they miss the mark on a few problems in a row, it wouldn’t do much to their standing, whereas if the local idiot suddenly pops out and solves an outstanding problem, everyone might change their minds about them. (This is based on minimal experience.)
Of course, it also depends on the group.
I’m curious — do you have any types of reputation in mind that you wouldn’t model like this, or any particular extra parts that you would add to it?
When you estimate how much mental energy a task will take, you are just as vulnerable to the planning fallacy as when you estimate how much time it will take.
I’m told that there was a period of history where only the priests were literate and therefore only they could read the Bible. Or maybe it was written in Latin and only they knew how to read it, or something. Anyway, as a result, they were free to interpret it any way they liked, and they used that power to control the masses.
Goodness me, it’s a good thing we Have Science Now and can use it to free ourselves from the overbearing grip of Religion!
Oh, totally unrelatedly, the average modern person is scientifically illiterate and absorbs their knowledge of what is “scientific” through a handful of big news sources and through cultural osmosis.
Moral: Be wary of packages labeled “science” and be especially wary of social pressure to believe implausible-sounding claims just because they’re “scientific”. There are many ways for that beautiful name to get glued onto random memes.
“Science confirms video games are good” is essentially the same statement as “The bible confirms video games are bad”, just with the authority changed. Luckily there remains a closer link between the authority “Science” and truth than between the authority “The bible” and truth, so it’s still an improvement.
Most people still update their worldview based upon whatever their tribe has agreed upon as their central authority. I’m having a hard time criticising people for doing this, however. This is something we all do! If I see Nick Bostrom writing something slightly crazy that I don’t fully understand, I will still give credence to his view simply for being an authority in my worldview.
I feel like my criticism of people blindly believing anything labeled “science” is essentially criticising people for not being smart enough to choose better authorities, but that’s a criticism that applies to everyone who doesn’t have the smartest authority (who just so happens to be Nick Bostrom, so we’re safe).
Maybe there’s a point to be made about not blindly trusting any authority, but I’m not smart enough to make that point, so I’ll default to someone who is.
Most people still update their worldview based upon whatever their tribe has agreed upon as their central authority. I’m having a hard time criticising people for doing this, however. This is something we all do!
Oh yes, that’s certainly true! My point is that anybody who has the floor can say that science has proven XYZ when it hasn’t, and if their audience isn’t scientifically literate then they won’t be able to notice. That’s why I led with the Dark Ages example where priests got to interpret the bible however was convenient for them.
I just saw a funny example of Extremal Goodhart in the wild: a child was having their picture taken, and kept being told they weren’t smiling enough. As a result, they kept screaming “CHEEEESE!!!” louder and louder.
If the laundry needs to be done, put in a load of laundry. If the world needs to be saved, save the world. If you want pizza for dinner, go preheat the oven.
When you ask a question to a crowd, the answers you get back have a statistical bias towards overconfidence, because people with higher confidence in their answers are more likely to respond.
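A quick toy simulation of that selection effect (the uniform confidence distribution and the “respond with probability equal to confidence” rule are assumptions made up for this sketch, not a claim about real crowds):

```python
import random

random.seed(0)

# Each person has a confidence in [0, 1]; more confident people are more
# likely to speak up when the crowd is asked a question.
crowd = [random.random() for _ in range(100_000)]
responders = [c for c in crowd if random.random() < c]

print("average confidence, whole crowd:  ", sum(crowd) / len(crowd))             # ~0.50
print("average confidence, answers heard:", sum(responders) / len(responders))   # ~0.67
```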
From my personal wiki. Seems appropriate for LessWrong.
The End-product Substitution is a hypothesis proposed by me about my behavior when choosing projects to work on. The hypothesis is that when I am evaluating how much I would like to work on a project, I substitute judgment of how much I will enjoy the end product for judgment of how much I will enjoy the process of creating it. For example, I recently [Sep 2019] considered creating a series of videos mirroring the content of the LessWrong sequences, and found myself fawning over how nice it would be to have created such a series of videos, and not thinking at all about how I would go about creating them, let alone how much I would enjoy doing that.
I just learned a (rationalist) lesson. I’m taking a course that has some homework that’s hosted on a third party site. There was one assignment at the beginning of the semester, a few weeks ago. Then, about a week ago, I was wondering to myself whether there would be any more assignments any time soon. In fact, I even wondered if I had somehow missed a few assignments, since I’d thought they’d be assigned more frequently.
Well, I checked my course’s website (different from the site where the homework was hosted) and didn’t see any mention of assignments. Then I went to the professor’s website, and saw that they said they didn’t assign any “formal homework”. Finally, I thought back to the in-class discussions, where the third-party homework was never mentioned.
“Ah, good,” I thought. “I guess I haven’t missed any assignments, and none are coming up any time soon either.”
Then, today, the third-party homework was actually mentioned in class, so just now I went to look at the third-party website. I have missed three assignments, and there is another one due on Sunday.
I am not judged by the quality of my reasoning. I am judged by what actually happens, as are we all.
In retrospect (read: “beware that hindsight bias might be responsible for this paragraph”) I kind of feel like I wasn’t putting my all into figuring out if I was missing any assignments, and was instead just nervously trying to convince myself that I wasn’t. Obviously, I would rather have had that unpleasant experience earlier and missed fewer assignments—aka, if I was missing assignments, then I should have wanted to believe that I was missing assignments.
Yup. And the key thing that I’m reminding myself of is that this can’t be achieved by convincing myself that there aren’t any assignments to miss. It can only be achieved for sure by knowing whether there are assignments or not.
I’ve been thinking of signing up for cryonics recently. The main hurdle is that it seems like it’ll be kind of complicated, since at the moment I’m still on my parents’ insurance, and I don’t really know how all this stuff works. I’ve been worrying that the ugh field surrounding the task might end up being my cause of death by causing me to look on cryonics less favorably just because I subconsciously want to avoid even thinking about what a hassle it will be.
But then I realized that I can get around the problem by pre-committing to sign up for cryonics no matter what, then just cancelling it if I decide I don’t want it.
It will be MUCH easier to make an unbiased decision if choosing cryonics means doing nothing rather than meaning that I have to go do a bunch of complicated paperwork now. It will be well worth a few months (or even years) of dues.
I’m a conlang enthusiast, and specifically I study loglangs, which are a branch of conlangs that are based around predicate logic. My motivation for learning these languages was that I was always bothered by all the strange irregularities in my natural language (like the simple past tense being the same as the past participle, and the word inflammable meaning two opposite things).
Learning languages like these has only drawn my attention to even more natural-language nonsense. Occasionally I explain this to conlang lay-people, and maybe 50% of them are surprised to find that English is irregular. Some of them even deny that it is, and state that it all follows a perfectly normal pattern. This is a perpetual annoyance to me, simply because I spend so much time immersed in this stuff that I’ve forgotten how hard it is to spot from scratch.
Well, a while ago I wanted to start learning Mandarin from a friend of mine who speaks it as their first language. While introducing the language, they said that things like tenses were expressed as separate words (“did eat”) rather than sometimes-irregular modifications of existing words (“ate”). This reminded me of loglangs, so I gave them the spiel that I gave in the two previous paragraphs—natlangs, irregularities, annoyances, etc.
“Huh,” said the friend. They then turned to another native Chinese speaker and asked “Does Chinese have anything like that?”
I said, “I guarantee it does.”
This was months ago. Just now I was reflecting on it, and I realized that I have almost no evidence whatsoever that Chinese isn’t perfectly regular (or close enough that the thrust of my claim would be wrong).
It’s clear to me now that my thought process was something like “Well, just yet another conlang outsider who’s stunned and amazed to find that natural languages have problems.” That brought to mind all the other times when I’d encountered people surprised to find that their mother tongue (almost always English) had irregularities, and the erroneous conclusion precipitated right out.
You may also be integrating something you’ve read and then forgotten you read, and this added weight to your visible-and-suspect thought process in order to make a true statement. It would not surprise me to learn that at least some of your study has included examples of irregularity from MANY natural languages, including Chinese. So “I guarantee it does” may be coming from multiple places in your knowledge.
So, was it actually incorrect, or just illegibly-justified?
Hmm, good question. I guess I wouldn’t be surprised to learn that I’d read about Chinese having irregularities, though the main text I’ve read about this (The Complete Lojban Language) didn’t mention any IIRC.
I wouldn’t be surprised if Chinese had no irregularities in the tense system – it’s a very isolating language. But here’s one irregularity: the negation of 有 is 没有 (“to not have/possess”), but the simple negation of every other verb is 不 + verb. You can negate other verbs with 没, but then it’s implied to be 没有 + verb, which makes the verb into something like a present participle. E.g., 没吃 = “to have not eaten”.
Also as a side note, I’m curious what’s actually in the paywalled posts. Surely people didn’t write a bunch of really high-quality content just for an April Fools’ day joke?
I would appreciate an option to hide the number of votes that posts have. Maybe not hide entirely, but set them to only display at the bottom of a post, and not at the top nor on the front page. With the way votes are currently displayed, I think I’m getting biased for/against certain posts before I even read them, just based on the number of votes they have.
Yeah, this was originally known as “Anti-Kibitzer” on the old LessWrong. It isn’t something we prioritized, but I think greaterwrong has an implementation of it. Though it would also be pretty easy to create a Stylish script for it (one that hides the score on the frontpage, and makes its color white on the post-page, requiring you to select the text to see it).
The other day, my roommate mentioned that the bias towards wanting good things for people in your in-group and bad things for those in your out-group can be addressed by including ever more people in your in-group.
Here’s a way to do that: take a person you want to move into your in-group, and try to imagine them as the protagonist of a story. What are their desires? What obstacles are they facing right now? How are they trying to overcome them?
I sometimes feel annoyed at a person just by looking at them. I invented this technique just now, but I used it one time on a person pictured in an advertisement, and it worked. I had previously been having a “what’s your problem?” feeling, and it was instantly replaced with a loving “I’m rooting for you” feeling.
Idea: “Ugh-field trades”, where people trade away their obligations that they’ve developed ugh-fields for in exchange for other people’s obligations. Both people get fresh non-ugh-fielded tasks. Works only in cases where the task can be done by somebody else, which won’t be every time but might be often enough for this to work.
Interesting thought. Unfortunately, most tasks where I’m blocked/delayed by an ugh field either dissolve it as soon as I identify it, or include as part of the ugh that only I can do it.
I’m currently working on a text document full of equations that use variables with extremely long names. I’m in the process of simplifying it by renaming the variables. For complicated reasons, I have to do this by hand.
Just now, I noticed that there’s a series of variables O1-O16, and another series of variables F17-F25. For technical reasons relating to the work I’m doing, I’m very confident that the name switch is arbitrary and that I can safely rename the F’s to O’s without changing the meaning of the equations.
But I’m doing this by hand. If I’m wrong, I will potentially waste a lot of work by (1) making this change, (2) making a bunch of other changes, (3) realizing I was wrong, (4) undoing all the other changes, (5) undoing this change, and (6) re-doing all the changes that came after it.
And for a moment, this spurred me to become less confident about the arbitrariness of the naming convention!
The correct thought would have been “I’m quite confident about this, but seeing as the stakes are high if I’m wrong and I can always do this later, it’s still not worth it to make the changes now.”
The problem here was that I was conflating “X is very likely true” with “I must do the thing I would do if X was certain”. I knew instinctively that making the changes now was a bad idea, and then I incorrectly reasoned that it was because it was likely to go wrong. It’s actually unlikely to go wrong, it’s just that if it does go wrong, it’s a huge inconvenience.
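Here’s the same point with made-up numbers (nothing below comes from the actual situation; it just makes the structure of the decision visible):

```python
# Illustrative only: invented probability and time costs for the rename decision.
p_wrong = 0.05         # I'm ~95% confident the F -> O renaming really is arbitrary
rework_hours = 6.0     # cost of steps (3)-(6) above if the renaming turns out to matter
defer_hours = 0.2      # cost of postponing the rename until I can check properly

expected_cost_now = p_wrong * rework_hours   # 0.30 expected hours
expected_cost_wait = defer_hours             # 0.20 hours, regardless of p_wrong

print(expected_cost_now, expected_cost_wait)
# Waiting wins, but not because the renaming is "likely to go wrong".
# It wins because the downside is large and deferring is cheap.
```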
Epistemic status: really shaky, but I think there’s something here.
I naturally feel a lot of resistance to the way culture/norm differences are characterized in posts like Ask and Guess and Wait vs Interrupt Culture. I naturally want to give them little pet names, like:
Guess culture = “read my fucking mind, you badwrong idiot” culture.
Ask culture = nothing, because this is just how normal, non-insane people act.
I think this feeling is generated by various negative experiences I’ve had with people around me, who, no matter where I am, always seem to share between them one culture or another that I don’t really understand the rules of. This leads to a lot of interactions where I’m being told by everyone around me that I’m being a jerk, even when I can “clearly see” that there is nothing I could have done that would have been correct in their eyes, or that what they wanted me to do was impossible or unreasonable.
But I’m starting to wonder if I need to let go of this. When I feel someone is treating me unfairly, it could just be because (1) they are speaking in Culture 1, then (2) I am listening in Culture 2 and hearing something they don’t mean to transmit. If I was more tuned in to what people meant to say, my perception of people who use other norms might change.
I feel there’s at least one more important pair of cultures, and although I haven’t mentioned it yet, it’s the one I had in mind most while writing this post. Something like:
Culture 1: Everyone speaks for themselves only, unless explicitly stated otherwise. Putting words in someone’s mouth or saying that they are “implying” something they didn’t literally say is completely unacceptable. False accusations are taken seriously and reflect poorly on the accuser.
Culture 2: The things you say reflect not only on you but also on people “associated” with you. If X is what you believe, you might have to say Y instead if saying X could be taken the wrong way. If someone is being a jerk, you don’t have to extend the courtesy of articulating their mistake to them correctly; you can just shun them off in whatever way is easiest.
I don’t really know how real this dichotomy is, and if it is real, I don’t know for sure how I feel about one being “right” and the other being “wrong”. I tried semi-hard to give a neutral take on the distinction, but I don’t think I succeeded. Can people reading this tell which culture I naturally feel opposed to? Do you think I’ve correctly put my finger on another real dichotomy? Which set of norms, if either, do you feel more in tune with?
I think this feeling is generated by various negative experiences I’ve had with people around me, who, no matter where I am, always seem to share between them one culture or another that I don’t really understand the rules of. This leads to a lot of interactions where I’m being told by everyone around me that I’m being a jerk, even when I can “clearly see” that there is nothing I could have done that would have been correct in their eyes, or that what they wanted me to do was impossible or unreasonable.
Is it because they’re expecting you to read their mind, and go along with their “culture”, instead of asking you?
To spell that out: are they (the negative experiences) the result of people, whose culture’s rules you don’t understand, expecting you to read their minds and go along with their culture, rather than asking you to go along with it?
Aha, no, the mind reading part is just one of several cultures I’m mentioning. (Guess Culture, to be exact.) If I default to being an Asker but somebody else is a Guesser, I might have the following interaction with them:
Me: [looking at some cookies they just made] These look delicious! Would it be all right if I ate one?
Them: [obviously uncomfortable] Uhm… uh… I mean, I guess so...
Here, it’s retroactively clear that, in their eyes, I’ve overstepped a boundary just by asking. But I usually can’t tell in advance what things I’m allowed to ask and what things I’m not allowed to ask. There could be some rule that I just haven’t discovered yet, but because I haven’t discovered it yet, it feels to me like each case is arbitrary, and thus it feels like I’m being required to read people’s minds each time. Hence why I’m tempted to call Guess Culture “Read-my-mind Culture”.
(Contrast this to Ask Culture, where the rule is, to me, very simple and easy to discover: every request is acceptable to make, and if the other person doesn’t want you to do what you’re asking to do, they just say “no”.)
It might be hard to take a normative stance, but if culture 1 makes you feel better AND leads to better results AND helps people individuate and makes adults out of them, then maybe it’s just, y’know, better. Not “better” in the naive mistake-theorist assumption that there is such a thing as a moral truth, but “better” in the correct conflict-theorist assumption that it just suits you and me and we will exert our power to make it more widely adopted, for the sake of us and our enlightened ideals.
When somebody is advocating taking an action, I think it can be productive to ask “Is there a good reason to do that?” rather than “Why should we do that?” because the former phrasing explicitly allows for the possibility that there is no good reason, which I think makes it both intellectually easier to realize that and socially easier to say it.
I just noticed that I’ve got two similarity clusters in my mind that keep getting called to my attention by wording dichotomies like high-priority and low-priority, but that would themselves be better labeled as big and small. This was causing me to interpret phrases like “doing a string of low-priority tasks” as having a positive affect (!) because what it called to mind was my own activity of doing a string of small, on-average medium-priority tasks.
My thought process might improve overall if I toss out the “big” and “small” similarity clusters and replace them with clusters that really are centered around “high-priority” and “low-priority”.
It’s happened again: I’ve realized that one of my old beliefs (pre-LW) is just plain dumb.
I used to look around at all the various diet (Paleo, Keto, low carb, low fat, etc.) and feel angry at people for having such low epistemic standards. Like, there’s a new theory of nutrition every two years, and people still put faith in them every time? Everybody swears by a different diet and this is common knowledge, but people still swear by diets? And the reasoning is that “fat” (the nutrient) has the same name as “fat” (the body part people are trying to get rid of)?
Then I encountered the “calories in = calories out” theory, which says that the only thing you need to do to lose weight is to make sure that you burn more calories than you eat.
And I thought to myself, “yeah, obviously.”.
Because, you see, if the orthodox asserts X and the heterodox asserts Y, and the orthodox is dumb, then Y must be true!
Anyway, I hadn’t thought about this belief in a while, but I randomly remembered it a few minutes ago, and as soon as I remembered its origins, I chucked it out the window.
Oops!
(PS: I wouldn’t be flabbergasted if the belief turned out true anyway. But I’ve reverted my map from the “I know how the world is” state to the “I’m awaiting additional evidence” state.)
On one hand, yeah, obviously.
On the other hand, “burning calories” is not an elementary action. Suppose I tell you to burn 500 calories now; how exactly would you achieve it? If your answer is that you would exercise or do something physically demanding, such actions spend ATP in the cells, so what if you don’t have enough ATP in your muscle cells; what is your plan B for burning calories? From the opposite side, you can limit the intake of calories. What if your metabolism is such that if you don’t provide enough calories, you will gradually fall into a coma and die?
Your metabolism can make it impossible for you to reduce the “calories in” or increase the “calories out”, if it is somehow set up in a way that does not convert the calories into useful energy in your muscle cells, and it starts damaging your organs if the calories are missing in general. So while the theory is almost tautologically true, it may still be impossible to use it to lose weight. And the problem is that the proponents of “calories in = calories out” usually smugly pretend that it is actionable advice, instead of a mere description.
The actionable advice needs to be about how the metabolism works, and the things that impact it, such as what you eat, and who knows what else. Also, if you have some hormonal imbalance, or allergy, or whatever, your metabolism may differ from other people’s even if you eat the same things and try to live the same lifestyle. So, while e.g. eating less refined sugar would probably help everyone, no advice would guarantee a perfect outcome for everyone.
You make a good point—even if my belief was technically true, it could still have been poorly framed and inactionable (is there a name for this failure mode?).
But in fact, I think it’s not even obvious that it was technically true. If we say “calories in” is the sum of the calorie counts on the labels of each food item you eat (let’s assume the labels are accurate) then could there not still be some nutrient X that needs to be present for your body to extract the calories? Say, you need at least an ounce of X to process 100 calories? If so, then one could eat the same amount of food, but less X, and potentially lose weight.
Or perhaps the human body can only process food between four and eight hours after eating it, and it doesn’t try as hard to extract calories if you aren’t being active, so scheduling your meals to take place four hours before you sit around doing nothing would make them “count less”.
Calories are (presumably?) a measure of chemical potential energy, but remember that matter itself can also be converted into energy. There’s no antimatter engine inside my gut, so my body fails to extract all of the energy present in each piece of food. Couldn’t the mechanism of digestion also fail to extract all the chemical potential energy of species “calorie”?
Yes, there are many steps in the metabolism that are not under your conscious control. I am not an expert, so I don’t want to speculate too much about the technical details, but I think that gut bacteria probably also play a role. Simply put, not everything you put in your mouth necessarily ends up in your bloodstream, and not everything you absorb is necessarily available in the form of muscle energy.
I don’t know any standard name. Seems to me the problem is confusing “rephrasing of the desired outcome” with “an algorithm that actually gets you there”. Something like:
Q: How can I get lot of money?
A: Become a millionaire!
Like, yeah, technically, everyone who successfully followed this advice ended up with lots of money, and everyone who didn’t can be accused of not following the advice properly, but that’s simply because those are two ways to describe the same thing.
Q: How can I lose weight?
A: Get rid of the extra atoms! I mean, extra calories!
Charitably, the advice is not absolutely bad, because for a hypothetical completely clueless listener it would provide a little information. But then, using this advice in practice means insinuating that your target is completely clueless, which is probably not the case.
But atoms aren’t similar to calories, are they? I maintain that this hypothesis could be literally false, rather than simply unhelpful.
Okay, it’s not the same. But the idea is that the answer is equally unhelpful, for similar reasons.
I want to give a big thumbs up of positive reinforcement. I think it’s great that I got to read an “oops! That was dumb, but now I’ve changed my mind.”
Thanks for helping to normalize this.
Thanks for the feedback! Here’s another one for ya. A relatively long time ago I used to be pretty concerned about Pascal’s wager, but then I devised some clever reasoning why it all cancels out and I don’t need to think about it. I reasoned that one of three things must be true:
1. I don’t have an immortal soul. In this case, I might as well be a good person.
2. I have an immortal soul, and after my bodily death I will be assigned to one of a handful of infinite fates, depending on how good of a person I was. In this case it’s very important that I be a good person.
3. Same as above, but the decision process is something else. In this case I have no way of knowing how my infinite fate will be decided, so I might as well be a good person during my mortal life and hope for the best.
But then, post-LW, I realized that there are two issues with this:
It doesn’t make any sense to separate out case 2 from the enormous ocean of possibilities allowed for by case 3. Or rather, I can separate it, but then I need to probabilistically penalize it relative to case 3, and I also need to slightly shift the “expected judgment criterion” found in case 3 away from “being a good person is the way to get a good infinite fate”, and it all balances out.
More importantly, this argument flippantly supposes that I have no way of discerning what process, if any, will be used to assign me an infinite fate. An infinite fate, mind you. I ought to be putting in more thought than this even if I thought the afterlife only lasted an hour, let alone eternity.
So now I am back to being rather concerned about Pascal’s wager, or more generally, the possibility that I have an immortal soul and need to worry about where it eventually ends up.
From my first read-through of the sequences I remember that it claims to show that the idea of there being a god is somewhat nonsensical, but I didn’t quite catch it the first time around. So my first line of attack is to read through the sequences again, more carefully this time, and see if they really do give a valid reason to believe that.
I think the first step is to ask yourself what you even mean by “god”.
Because if you have a definition like “the spirit of our dead chieftain, who sometimes visits me in my dreams”, I have no problem with that. Like, I find it completely plausible that you see the image of the chieftain in your dreams; nothing unscientific about that. It’s just that the image is generated by your brain, so if you try to communicate with it, it can only give you advice that your brain generated, and it can only grant your wishes in the sense that you accomplish that yourself. But if you agree that this is exactly what you meant, then such a god is perfectly okay with me.
But the modern definition is more like: an intelligent being that exists outside of our universe, but can observe it and change it. Suspending disbelief for a moment: how exactly can a being be “intelligent”, whether inside our universe or not? Intelligence implies processing data. That would imply that the god has some… supernatural neurons, or some other architecture capable of processing data. So the god is a mechanism (in the sense that a human is a biological mechanism) composed of parts, although those parts do not necessarily have to be found in our physics. Still kinda plausible; maybe gods are made out of dark matter, who knows. But then, how did this improbably complicated mechanism come into existence? Humans were made by evolution; were gods too? But then again those gods are not the gods of religion; they are merely powerful aliens. And powerful aliens are neither creators of the universe, nor are they omniscient.
Then the theologists try to come up with some smart-sounding arguments, like “god is actually supremely simple”—therefore Occam’s razor does not disprove him, and no evolution was needed, because simple things are more likely a priori, so a supreme being is supremely likely. Or something like that. But this is nonsense, because “supremely simple” and “capable of processing information” are incompatible.
The last argument would be: okay, we do not have a coherent explanation of god, but you don’t have a coherent explanation of how the universe can exist without one. And a potential answer is that maybe existence is relative. Like, if you run a simulation of Conway’s Game of Life, it could possibly contain evolved intelligent life, and that life would ask “how was our universe created?” And it’s not like you created the universe by simulating it, because you are merely following the mathematical rules; so it’s more like the math created that universe and you are only observing it. If the beings in that mathematical universe pray to gods, there is no way for anyone outside to intervene (while simultaneously following the mathematical rules). So the universe inside the Game of Life is a perfectly godless universe, based on math. But does it exist? Well, the rules of math describe the intelligent beings inside that universe, who live, think, and interact with their world. Maybe that’s all that is needed for existence; or rather, the question is what else could be needed? And then, maybe our world is also just an implication of some mathematical equations.
tl;dr—maybe math itself implies existence, and if you try to turn the concept of an (intelligent, perceiving, acting) god into something coherent, it is no longer the god of religion but merely a powerful alien
The supernatural isn’t supposed to be the natural done all over again. The typical theological claim is that God’s wisdom or whatever is an intrinsic quality, not something with moving parts.
Well, “wisdom as an intrinsic quality” is a mysterious answer. And what is “wisdom without moving parts”? A precomputed table of wise answers to all possible questions? Who precomputed it and how?
I agree that this is how theology usually answers it, but it is an answer that doesn’t make any sense when you look at it closer; it’s just some good-sounding words randomly glued together. And if you try to make it refer to something, even a hypothetical something, the whole explanation falls apart.
Reductionism is a combination of three claims.
That many things are made of smaller components.
That those particular things can be understood in terms of the operations of their components.
That there’s an irreducibly basic level; it’s not turtles all the way down.
If it’s always the case that something that isn’t explicable in terms of its parts is mysterious, then the lowest level is mysterious. If nothing is mysterious, if you apply the argument against mysterious answers without prejudice, then reductionism is false. There isn’t a consistent set of principles here.
Continued...
Naturalism is the claim that there is a bunch of fundamental properties that just are, at the bottom of the stack, and everything is built up from that. Supernaturalism is the claim that the intrinsic stuff is at the top of the stack, and everything else is derived from it top-down. That may be 100% false, but it is the actual claim.
There’s a thing called the principle of charity, where one party interprets the other’s statements so as to maximise their truth value. This only enhances communication if the truth is not basically in dispute... that’s the easy case. The hard case is when there is a basic dispute about what’s true. In that case, it’s not helpful to fix the other person’s claims by making them more reasonable from your point of view.
Anyway, that’s how we ended up with “God must have superneurons in his superbrain”.
Feels like in the top-down universe, science shouldn’t work at all. I mean, when you take a magnifying glass and look at the details, they are supposedly generated on the fly to fit the larger picture. Then you apply munchkinry to the details and invent an atomic bomb or quantum computer… which means… what exactly, from the top-down perspective?
Yeah, you can find an excuse, e.g. that some of those top-down principles are hidden like Easter eggs, waiting to be discovered later. That the Platonic idea of smartphones has been waiting for us since the creation of the universe, but was only revealed to the recent generation. Which would mean that the top-down universe has some reason to pretend to be bottom-up, at least in some aspects...
Okay, the same argument could be made that quantum physics pretends to be classical physics at larger scale, or relativity pretends to be Newtonian mechanics at low speeds… as if the scientists are trying to make up silly excuses for why their latest magic works but totally “doesn’t contradict” what the previous generations of scientists were telling us...
Well, at least it seems like the bottom-up approach is fruitful, whether the true reason is that the universe is bottom-up, or that the universe is top-down in a way that tries really hard to pretend that it is actually bottom-up (either in the sense that when it generates the—inherently meaningless—details for us, it makes sure that all consequences of those details are compatible with the preexisting Platonic ideas that govern the universe… or like a Dungeon Master who allows the players to invent all kinds of crazy stuff and throw the entire game off balance, because he values consistency above everything).
More importantly, in a universe where there is magic all the way up, what sense does it make to adopt the essentially half-assed approach, where you believe in the supernatural but also kinda use logic except not too seriously… might as well throw the logic away completely, because in that kind of universe it is not going to do you much good anyway.
The basic claim of a top down universe is a short string that doesn’t contain much information. About the same amount as a basic claim of reductionism.
The top down claim doesn’t imply a universe of immutable physical law, but it doesn’t contradict it either.
The same goes for the bottom-up claim. A universe of randomly moving high entropy gas is useless for science and technology, but compatible with reductionism.
But all this is rather beside the point. Even if supernaturalism is indefensible, you can’t refute it by changing it into something else.
I wouldn’t call the dead chieftain a god—that would just be a word game.
Wait wait! You say a god-like being created by evolution cannot be a creator of the universe. But that’s only true if you constrain that particular instance of evolution to have occurred in *this* universe. Maybe this universe is a simulation designed by a powerful “alien” in another universe, who itself came about from an evolutionary process in its own universe.
It might be “omniscient” in the sense that it can think 1000x as fast as us and has 1000x as much working memory and is familiar with thinking habits that are 1000x as good as ours, but that’s a moot point. The real thing I’m worried about isn’t whether there exists an omniscient-omnipotent-benevolent creature, but rather whether there exists *some* very powerful creature who I might need to understand to avoid getting horrible outcomes.
I haven’t yet put much thought into this, since I only recently came to believe that this topic merits serious thought, but the existence of such a powerful creature seems like a plausible avenue to the conclusion “I have an infinite fate and it depends on me doing/avoiding X”.
This is another area where my understanding could stand to be improved (and where I expect it will be during my next read-through of the sequences). I’m not sure exactly what kind of simplicity Occam’s razor uses. Apparently it can be formalized as Kolmogorov complexity, but the only definition I’ve ever found for that term is “the Kolmogorov Complexity of X is the length of the shortest computer program that would output X”. But this definition is itself in need of formalization. Which programming language? And what if X is something other than a stream of bits, such as a dandelion? And even once that’s answered, I’m not quite sure how to arrive at the conclusion that Kolmogorov-ly simpler things are more likely to be encountered.
(All that being said, I’d like to note that I’m keeping in mind that just because I don’t understand these things doesn’t mean there’s nothing to them. Do you know of any good learning resources for someone who has my confusions about these topics?)
That much makes sense, but I think it excludes a possibly important class of universe that is based on math but also depends on a constant stream of data from an outside source. Imagine a Life-like simulation ruleset where the state of the array of cells at time T+1 depended on (1) the state of the array at time T and (2) the on/off state of a light switch in my attic at time T. I could listen to the prayers of the simulated creatures and use the light switch to influence their universe such that they are answered.
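To make that concrete, here’s a minimal sketch in Python (the particular rule tweak, where the switch also lets cells be born with two neighbours, is something I’m inventing purely for illustration):

```python
import numpy as np

def step(grid, switch_on):
    """One Life-like update that also depends on an outside bit.

    With the switch off this is ordinary Conway's Life; with it on,
    cells with exactly two live neighbours are also born (an arbitrary
    tweak, just to show the outside world leaking into the rules).
    """
    # Count each cell's eight neighbours, wrapping around the edges.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    born = (neighbours == 3) | ((neighbours == 2) & switch_on)
    survive = (grid == 1) & ((neighbours == 2) | (neighbours == 3))
    return (born | survive).astype(int)

rng = np.random.default_rng(0)
world = rng.integers(0, 2, size=(32, 32))
for t in range(100):
    world = step(world, switch_on=(t % 7 == 0))  # the light switch in my attic
```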
Some people in history did, specifically, ancient Romans. But now we don’t. Just making it obvious.
And this is similar to the chieftain thing. You can have a religion that defines “god” as “an alien from another universe”, but many religions insist that god is not created but eternal.
Yes, this is a practical approach. Well, you cannot disprove such a thing, because it is logically possible. (Obviously, “possible” does not automatically imply “it happened”.) But unless you assume it is “simulations all the way up”, there must be a universe that is not created by an external alien lifeform. Therefore, it is also logically possible that our universe is like that.
There is no reason to assume that the existing religions have something in common with the simulating alien. When I play Civilization, the “people” in my simulation have a religion, but it doesn’t mean that I believe in it, or even approve it, or that I am somehow going to reward them for it.
It’s just a cosmic horror that you need to learn to live with. There are more.
Any programming language; for large enough values it doesn’t matter. If you believe that e.g. Python is much better in this regard than Java, then for sufficiently complicated things the most efficient way to implement them in Java is to implement a Python emulator (a constant-sized piece of code) and implementing the algorithm in Python. So if you chose a wrong language, you pay at most a constant-sized penalty. Which is usually irrelevant, because these things are usually applied to debating what happens in general when the data grow.
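For reference, the constant-penalty claim is usually stated as the invariance theorem; in the standard notation (not anything specific to this thread):

```latex
\[
  K_U(x) \;=\; \min \{\, |p| : U(p) = x \,\}
\]
% Invariance theorem: switching from universal machine U (say, Python) to
% another universal machine V (say, Java) costs at most a constant c_{U,V},
% roughly the length of a U-interpreter written for V, independent of x:
\[
  K_V(x) \;\le\; K_U(x) + c_{U,V}
\]
```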
But I agree that if you talk about small data, it is underspecified. I don’t really know what it could mean to “have a universe defined by the following three bits: 0, 0, 1”, and maybe no one has a meaningful answer to this. But there are cases where you can have an intuition that for any reasonable definition of a programming language, X should be simpler than Y.
Just an intuition pump: Imagine that there is a multiverse A containing all universes that can be run by programs having exactly 100 lines of code; and there is a multiverse B containing all universes that can be run by programs having exactly 105 lines of code. The universes from A are more likely, because they also appear in B.
For each program A1 describing a universe from A, you have a set of programs describing exactly the same universe in B, simply by adding an “exit” instruction on line 101, and arbitrary instructions on lines 102-105. If we take a program B1 describing a universe from B in such way that each of the 105 lines is used meaningfully, then… in multiverse A we have one A1 vs zero B1, and in multiverse B we have many A1-equivalents vs one B1; in either case A1 wins.
A part of this is in standard computer science curriculum, and another part is a philosophical extrapolation. I do not have a recommendation about a specific textbook, I just vaguely remember things like this from university. (Actually, I probably never heard explicitly about Kolmogorov complexity at university, but I learned some related concepts that allowed me to recognize what it means and what it implies, when I found it on Less Wrong.)
Then I would say the Kolmogorov complexity of the “Life-only” universe is the complexity of the rules of Life; but the complexity of the “Life+inputs” universe is the complexity of the rules plus the complexity of whoever generates the inputs, and everything they depend on, so it is the complexity of the outside universe.
The Civ analogy makes sense, and I certainly wouldn’t stop at disproving all actually-practiced religions (though at the moment I don’t even feel equipped to do that).
Are you sure it’s logically possible in the strict sense? Maybe there’s some hidden line of reasoning we haven’t yet discovered that shows that this universe isn’t a simulation! (Of course, there’s a lot of question-untangling that has to happen first, like whether “is this a simulation?” is even an appropriate question to ask. See also: Greg Egan’s book Permutation City, a fascinating work of fiction that gives a unique take on what it means for a universe to be a simulation.)
This sounds like the kind of thing someone might say who is already relatively confident they won’t suffer eternal damnation. Imagine believing with probability at least 1/1000 that, if you act incorrectly during your life, then...
(WARNING: graphic imagery) …upon your bodily death, your consciousness will be embedded in an indestructible body and put in a 15K degree oven for 100 centuries. (END).
Would you still say it was just another cosmic horror you have to learn to live with? If you wouldn’t still say that, but you say it now because your probability estimate is less than 1/1000, how did you come to have that estimate?
The constant-sized penalty makes sense. But I don’t understand the claim that this concept is usually applied in the context of looking at how things grow. Occam’s razor is (apparently) formulated in terms of raw Kolmogorov complexity—the appropriate prior for an event X is 2^(-B), where B is the Kolmogorov Complexity of X.
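Spelling out the formulation I mean here (standard notation for a prefix-free universal machine U; none of the symbols come from this thread):

```latex
\[
  P(x) \;\propto\; 2^{-K(x)}
  \qquad \text{or, summing over every program that outputs } x: \qquad
  m(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-|p|}
\]
```

As far as I know, the two versions agree up to a multiplicative constant, so the same language-choice slack shows up in either form.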
Let’s say general relativity is being compared against Theory T, and the programming language is Python. Doesn’t it make a huge difference whether you’re allowed to “pip install general-relativity” before you begin?
I agree that these intuitions can exist, but if I’m going to use them, then I detest this process being called a formalization! If I’m allowed to invoke my sense of reasonableness to choose a good programming language to generate my priors, why don’t I instead just invoke my sense of reasonableness to choose good priors? Wisdom of the form “programming languages that generate priors that work tend to have characteristic X” can be transformed into wisdom of the form “priors that work tend to have characteristic X”.
I have to admit that I kind of bounced off of this. The universe-counting argument makes sense, but it doesn’t seem especially intuitive to me that the whole of reality should consist of one universe for each computer program of a set length written in a set language.
Can I ask which related concepts you mean?
Oh, that makes sense. In that case, the argument would be that nothing outside MY universe could intervene in the lives of the simulated Life-creatures, since they really just live in the same universe as me. But then my concern just transforms into “what if there’s a powerful entity living in this universe (rather than outside of it) who will punish me if I do X, etc”.
I find it difficult to imagine how such an argument could even be constructed. “Our universe isn’t a simulation because it has property X” doesn’t explain why the simulator could not simulate X. The usual argument is “because quantum stuff, the simulation would require insane amounts of computing power”, which is true, but we have no idea what the simulating universe looks like, and what kind of physics it has… maybe what’s an insane amount for us is peanuts for them.
But maybe there is some argument why computing power in principle (for some mathematical reason) cannot exceed a certain value, ever. And the value may turn out to be insufficient to simulate our universe. And we can somehow make sure that our entire universe is simulated in sufficient resolution (not something like: the Earth or perhaps the entire Solar system is simulated in full quantum physics, but everything else is just a credible approximation). Etc. Well, if such a thing happens, then I would accept the argument.
Yeah, I just… stopped worrying about these kinds of things. (In my case, “these kinds of things” refer e.g. to very unlikely Everett branches, which I still consider more likely than gods.) You just can’t win this game. There are a million possible horror scenarios, each of them extremely unlikely, but each of them extremely horrifying, so you would just spend all your life thinking about them; and there is often nothing you can do about them, and in some cases you would be required to do contradictory things (you spend your entire life trying to appease the bloodthirsty Jehovah, but it turns out the true master of the universe is the goddess Kali and she is very displeased with your Christianity...), or it could be some god you don’t even know because it is a god of the Aztecs, or some future god that will only be revealed to humanity in the year 3000. Maybe humans are just a precursor to an intelligent species that will exist a million years in the future, and from their god’s perspective humans are even less relevant than monkeys are for Christianity. Maybe we are just meant to be food for the space locusts from the Andromeda galaxy. Maybe our entire universe is a simulation on a computer in some alien universe with insane computing power, but they don’t care about the intelligent beings or life in general; they just use the flashing galaxies as a screen saver when they are bored. If you put things into perspective, assigning probability 1/1000 to any specific religion is way too much; all kinds of religions, existing and fictional put together, don’t deserve that much.
And by the way, torturing people forever, because they did not believe in your illogical incoherent statements unsupported by evidence, that is 100% compatible with being an omnipotent, omniscient, and omnibenevolent god, right? Yet another theological mystery...
On the other hand, if you assume an evil god, then… maybe the holy texts and promises of heaven are just a sadistic way he is toying with us, and then he will torture all of us forever regardless.
So… you can’t really win this game. Better to focus on things where you actually can gather evidence, and improve your actual outcomes in life.
Psychologically, if you can’t get rid of the idea of the supernatural, maybe it would be better to believe in an actually good god. Someone who is at least as reasonable and good as any of us, which should not be an impossibly high standard. Such a god certainly wouldn’t spend an eternity torturing random people for “crimes” such as being generally decent people but believing in a wrong religion or no religion at all, or having extramarital sex, etc. (Some theologians would say that this is actually their god. I don’t think so, but whatever.) I don’t really believe in such a god either, honestly, but it is a good fallback plan when you can’t get rid of the idea of gods completely.
That would be cheating, obviously. Unless by the length of code you mean also the length of all used libraries, in which case it is okay. It is assumed that the original programming language does not favor any specific theory, just provides very basic capabilities for expressing things like “2+2” or “if A then B else C”. (Abstracting from details such as how many pluses are in one “if”.)
Yeah, ok. The point of all this is how we can compare the “complexity” of two things when neither is a strict subset of the other. The important part is that “fewer words” does not necessarily mean “smaller complexity”, because it allows obvious cheating (invent a new word that means exactly what you are arguing for, and then insist that your theory is really simple because it could be described by one word—the same trick as with importing a library), but even if it is not your intention to cheat, your theory can still benefit from some concepts having shorter words for historical reasons, or even because words related to humans (or life on Earth in general) are already shorter, so you should account for the complexity that is already included in them. Furthermore, it should be obvious to you that “1000 different laws of physics” is more complex than “1 law of physics, applied in the same way to 1000 particles”. If all this is obvious to you, then yes, making the analogy to a programming language does not bring any extra value.
But historical evidence shows that humans are quite bad at this. They will insist that stars are just shining dots in the sky instead of distant solar systems, because “one solar system + thousands of shining dots” seems to them less complex than “thousands of solar systems”. They will insist that Milky Way or Andromeda cannot be composed of stars, because “thousand stars + one Milky Way + one Andromeda” seems to them less complex than “millions of stars”. More recently (and more controversially), they will insist that “quantum physics + collapse + classical physics” is less complex than “quantum physics all the way up”. Programming analogy helps to express this in a simple way: “Complexity is about the size of code, not how large values are stored in the variables.”
Compression (lossless), like in the “zip” files. Specifically, the fact that an average random file cannot be compressed, no matter how clever the method used is. Like, for any given value N, the number of all files with size N is larger than the number of all files with size smaller than N. So, whatever your compression method is, if it is lossless, it needs to be injective (decompression has to recover the original), so there must be at least one file of size N that it will not compress into a file of smaller size. Even worse, for each file of size N compressed to a smaller size, there must be a file of size smaller than N that gets “compressed” to a file of size N or more.
So how does compression work in the real world? (Because experience shows it does.) It ultimately exploits the fact that most of the files we want to compress are not random, so it is designed to compress non-random files into smaller ones, and random (containing only noise) files into slightly larger ones. Like, you can try it at home; generate a one-megabyte file full of randomly generated bits, then try all the compression programs you have installed, and see what happens.
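The at-home version of that experiment is quick to run; here’s a sketch using Python’s zlib (just one compressor rather than “all the programs you have installed”, and the exact sizes will vary):

```python
import os
import zlib

random_blob = os.urandom(1_000_000)        # one megabyte of random bytes
patterned_blob = b"abcdefgh" * 125_000     # one megabyte of pure repetition

for name, blob in [("random", random_blob), ("patterned", patterned_blob)]:
    compressed = zlib.compress(blob, level=9)
    print(name, len(blob), "->", len(compressed))

# Typically the random file comes out slightly larger than it went in
# (header and framing overhead), while the repetitive one shrinks to a
# few kilobytes: exactly the asymmetry the counting argument predicts.
```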
Now, each specific compression algorithm recognizes and exploits some kinds of regularity, and is blind towards others. This is why people sometimes invent better compression algorithms that exploit more regularities. The question is, what is the asymptote of this progress? If you tried to invent the best compression algorithm ever, one that could exploit literally all kinds of non-randomness, what would it look like?
(You may want to think about this for a moment, before reading more.)
The answer is that the hypothetical best compression algorithm ever would transform each file into the shortest possible program that generates this file. There is only one problem with that: finding the shortest possible program for every input is algorithmically impossible—this is related to the halting problem. Regardless, if the file is compressed by this hypothetical ultimate compression algorithm, the size of the compressed file would be the Kolmogorov complexity of the original file.
Then we are no longer talking about gods in the modern sense, but about powerful aliens.
I see. In that case, I think we’re reacting differently to our situations due to being in different epistemic states. The uncertainty involved in Everett branches is much less Knightian—you can often say things like “if I drive to the supermarket today, then approximately 0.001% of my future Everett branches will die in a car crash, and I’ll just eat that cost; I need groceries!”. My state of uncertainty is that I’ve barely put five minutes of thought into the question “I wonder if there are any tremendously important things I should be doing right now, and particularly if any of the things might have infinite importance due to my future being infinitely long.”
Well, that’s another reference to “popular” theism. Popular theism is a subset of theism in general, which itself is a subset of “worlds in which there’s something I should be doing that has infinite importance”.
Yikes!! I wish LessWrong had emojis so I could react to this possibility properly :O
This advice makes sense, though given the state of uncertainty described above, I would say I’m already on it.
This is a good fallback plan for the contingency in which I can’t figure out the truth and then subsequently fail to acknowledge my ignorance. Fingers crossed that I can at least prevent the latter!
Well, I would have said that an exactly analogous problem is present in normal Kolmogorov Complexity, but...
...but this, to me, explains the mystery. Being told to think in terms of computer programs generating different priors (or more accurately, computer programs generating different universes that entail different sets of perfect priors) really does influence my sense of what constitutes a “reasonable” set of priors.
I would still hesitate to call it a “formalism”, though IIRC I don’t think you’ve used that word. In my re-listen of the sequences, I’ve just gotten to the part where Eliezer uses that word. Well, I guess I’ll take it up with somebody who calls it that.
By the way, it’s just popped into my head that I might benefit from doing an adversarial collaboration with somebody about Occam’s razor. I’m nowhere near ready to commit to anything, but just as an offhand question, does that sound like the sort of thing you might be interested in?
Insightful comments! I see the connection: really, every compression of a file is a compression into the shortest program that will output that file, where the programming language is the decompression algorithm and the search algorithm that finds the shortest program isn’t guaranteed to be perfect. So the best compression algorithm ever would simply be one with a really, really apt decompression routine (one that captures very well the nuanced non-randomness found in files humans care about) and an oracle for computing shortest programs (rather than a decent but imperfect search algorithm).
Well, if the “inside/outside the universe” distinction is going to mean “is/isn’t causally connected to the universe at all” and a god is required to be outside the universe, then sure. But I think if I discovered that the universe was a simulation and there was a being constantly watching it and supplying a fresh bit of input every hundred Planck intervals in such a way that prayers were occasionally answered, I would say that being is closer to a god than an alien.
But in any case, the distinction isn’t too relevant. If I found out that there was a vessel with intelligent life headed for Earth right now, I’d be just as concerned about that life (actual aliens) as I would be about god-like creatures that should debatably also be called aliens.
Definitely not interested. My understanding of these things is kinda intuitive (with intuition based on decent knowledge of math and computer science, but still), so I believe that “I’ll know it when I see it” (give me two options, and I’ll probably tell you whether one of them seems “simpler” than the other), but I wouldn’t try to put it into exact words.
Kk! Thanks for the discussion :)
Congratulations—noticing that you are confused is an important step!
What are you doing while “awaiting additional evidence”? This is a topic that doesn’t have a neutral/agnostic position—biology forces you to eat, you have some influence (depending on your willpower model) over what and how much.
This belief wasn’t really affecting my eating habits, so I don’t think I’ll be changing much. My rules are basically:
No meat (I’m a vegetarian for moral reasons).
If I feel hungry but I can see/feel my stomach being full by looking at / touching my belly, I’m probably just bored or thirsty and I should consider not eating anything.
Try to eat at least a meal’s worth of “light” food (like toast or cereal as opposed to pizza or nachos) per day. This last rule is just to keep me from getting stomach aches, which happens if I eat too much “heavy” food in too short a time span.
I think I might contend that this kind of reflects an agnostic position. But I’m glad you asked, because I hadn’t noticed before that rule 2 actually does implicitly assume some relationship between “amount of food” and “weight change”, and is put in place so I don’t gain weight. So I guess I should really have said that what I tossed out the window was the extra detail that calories alone determine the effect food will have on one’s weight. I still believe, for normal cases, that taking the same eating pattern but scaling it up (eating more of everything but keeping the ratios the same) will result in weight gain.
The ball-on-a-hill model of reputation
This is a model I came up with in middle school to explain why it felt like I was treated differently from others even when I acted the same. I invented it long before I fully understood what models were (which only occurred sometime in the last year) and as such it’s something of a “baby’s first model” (ha ha) for me. As you’d expect for something authored by a middle schooler regarding their problems, it places minimal blame on myself. However, even nowadays I think there’s some truth to it.
Here’s the model. Your reputation is a ball on a hill. The valley on one side of the hill corresponds to being revered, and the valley on the other side corresponds to being despised. The ball begins on top of the hill. If you do something that others see as “good” then the ball gets nudged to the good side, and if you do something that others see as “bad” then it gets nudged to the other side.
Here’s where the hill comes in. Once your reputation has been nudged one way or the other, it begins to affect how others interpret your actions. If you apologize for something you did wrong and your reputation is positive, you’re “being the bigger person and owning up to your mistakes”; if you do the same when your reputation is negative, you’re “trying to cover your ass”. Once your action has been interpreted according to your current reputation, it is then fed back into the calculation as an update: the rep/+ person who apologized gets a boost, and the rep/- person who apologized gets shoved down even further.
Hence, “once the ball is sufficiently far down the hill, it begins to roll on its own”. You can take nothing but neutral actions and your reputation will become a more extreme version of what it already is (assuming it was far-from-center to begin with). This applies to positive reputation as well as negative! I have had the experience of my reputation rolling down the positive side of the hill—it was great.
There are also other factors that can affect the starting position of the ball, e.g. if you’re attractive or if somebody gives you a positively-phrased introduction then you start on the positive side, but if you’re ugly or if your current audience has heard bad rumors about you then you start on the negative side.
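Here’s a toy simulation of the feedback loop, with every number (the interpretation bias, the update rate) invented purely for illustration:

```python
import random

def final_reputation(start, bias=0.4, rate=0.1, steps=50):
    """Ball-on-a-hill dynamics: each action's judged value is its intrinsic
    quality plus a bias proportional to the current reputation, and that
    judged value (not the intrinsic one) is what updates the reputation."""
    rep = start
    for _ in range(steps):
        intrinsic = random.gauss(0.0, 1.0)   # genuinely neutral on average
        judged = intrinsic + bias * rep      # filtered through the current rep
        rep += rate * judged
    return rep

random.seed(0)
for start in (-1.0, 0.0, 1.0):
    runs = [final_reputation(start) for _ in range(2000)]
    print(start, round(sum(runs) / len(runs), 2))

# With these made-up parameters, a slightly negative start drifts further
# negative and a slightly positive start drifts further positive, even
# though the underlying actions are neutral: the ball rolls on its own.
```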
I’d be curious if anyone else has had this experience and feels this is an accurate model, and I’d be very curious if anyone thinks there is a significant hole in it.
This very much matches my own model. Once you are high or low status, it’s self-reinforcing and people will interpret the evidence to support the existing story, which is why when you are high you can play low and you won’t lose status (you’re just “slumming it” or something similar) and when you are low you can play high and will not gain any status (you’re “reaching above your station”).
We used to talk about a “halo effect” here (and sometimes, “negative halo effect”), I like this way of describing it.
I think it might be more valuable to just prefer to use a general model of confirmation bias though. People find whatever they’re looking for. They only find the truth if they’re really really looking for the truth, whatever it’s going to be, and nothing else, and most people aren’t, and that captures most of what is happening.
Heh, I like this sentence a lot (both for being funny, sort of adorable, and also just actually being a useful epistemic status)
This model certainly seems relevant, but should probably be properly seen as one particular lens, or a facet of a much more complicated equation. (In particular, people can have different kinds of reputation in different domains)
That’s true. I didn’t notice this as I was writing, but my entire post frames “reputation” as being representable as a number. I think this might have been more or less true for the situations I had in mind, all of which were non-work social groups with no particular aim.
Here’s another thought. For other types of reputations that can still be modeled as a ball on a hill, it might be useful to parameterize the slope on each side of the hill.
“Social reputation” (the vague stuff that I think I was perceiving in the situations that inspired this model) is one where the rep/+ side is pretty shallow, but the rep/- side is pretty steep. It’s not too hard to screw up and lose a good standing — in particular, if the social group gets it in their head that you were “faking it” and that you’re “not actually a good/kind/confident/funny person” — but once you’re down the well, it’s very hard to climb out.
“Academic reputation”, on the other hand, seems like it might be the reverse. I can imagine that if someone is considered a genius, and then they miss the mark on a few problems in a row, it wouldn’t do much to their standing, whereas if the local idiot suddenly pops out and solves an outstanding problem, everyone might change their minds about them. (This is based on minimal experience.)
Of course, it also depends on the group.
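As a toy illustration of the asymmetric-slope idea (again, with made-up numbers), you could give the ball a different gain on each side of the hill. The “social reputation” setting below has a shallow positive side and a steep negative side; swapping the two gains gives something like the “academic” case.

```python
def simulate_asym(initial_rep, steps=20, gain_up=0.05, gain_down=0.4):
    """A ball-on-a-hill with a different slope on each side of neutral."""
    rep = initial_rep
    for _ in range(steps):
        gain = gain_up if rep >= 0 else gain_down  # shallow climb, steep fall
        rep += gain * rep  # same neutral-action feedback loop as before
    return rep

print(simulate_asym(+0.1))  # a good standing improves only slowly
print(simulate_asym(-0.1))  # a bad standing collapses quickly
```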
I’m curious — do you have any types of reputation in mind that you wouldn’t model like this, or any particular extra parts that you would add to it?
When you estimate how much mental energy a task will take, you are just as vulnerable to the planning fallacy as when you estimate how much time it will take.
I’m told that there was a period of history where only the priests were literate and therefore only they could read the Bible. Or maybe it was written in Latin and only they knew how to read it, or something. Anyway, as a result, they were free to interpret it any way they liked, and they used that power to control the masses.
Goodness me, it’s a good thing we Have Science Now and can use it to free ourselves from the overbearing grip of Religion!
Oh, totally unrelatedly, the average modern person is scientifically illiterate and absorbs their knowledge of what is “scientific” through a handful of big news sources and through cultural osmosis.
Hmm.
Moral: Be wary of packages labeled “science” and be especially wary of social pressure to believe implausible-sounding claims just because they’re “scientific”. There are many ways for that beautiful name to get glued onto random memes.
“Science confirms video games are good” is essentially the same statement as “The Bible confirms video games are bad”, just with the authority changed. Luckily, there remains a closer link between the authority “Science” and truth than between the authority “The Bible” and truth, so it’s still an improvement.
Most people still update their worldview based upon whatever their tribe has agreed upon as their central authority. I’m having a hard time criticising people for doing this, however. This is something we all do! If I see Nick Bostrom writing something slightly crazy that I don’t fully understand, I will still give credence to his view simply for being an authority in my worldview.
I feel like my criticism of people blindly believing anything labeled “science” is essentially criticising people for not being smart enough to choose better authorities, but that’s a criticism that applies to everyone who doesn’t have the smartest authority (who just so happens to be Nick Bostrom, so we’re safe).
Maybe there’s a point to be made about not blindly trusting any authority, but I’m not smart enough to make that point, so I’ll default to someone who is.
Oh yes, that’s certainly true! My point is that anybody who has the floor can say that science has proven XYZ when it hasn’t, and if their audience isn’t scientifically literate then they won’t be able to notice. That’s why I lead with the Dark Ages example where priests got to interpret the bible however was convenient for them.
I just saw a funny example of Extremal Goodhart in the wild: a child was having their picture taken, and kept being told they weren’t smiling enough. As a result, they kept screaming “CHEEEESE!!!” louder and louder.
A koan:
If the laundry needs to be done, put in a load of laundry.
If the world needs to be saved, save the world.
If you want pizza for dinner, go preheat the oven.
When you ask a question to a crowd, the answers you get back have a statistical bias towards overconfidence, because people with higher confidence in their answers are more likely to respond.
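A toy simulation of this selection effect (with an arbitrary choice of how confidence maps to the probability of responding) makes the skew easy to see:

```python
import random

def average_confidence(population=10_000):
    """Compare everyone's confidence with the confidence of those who respond."""
    everyone, responders = [], []
    for _ in range(population):
        confidence = random.random()         # each person's confidence in their answer
        everyone.append(confidence)
        if random.random() < confidence:     # higher confidence -> more likely to answer
            responders.append(confidence)
    return sum(everyone) / len(everyone), sum(responders) / len(responders)

overall, heard = average_confidence()
print(f"mean confidence, whole crowd:   {overall:.2f}")  # about 0.50
print(f"mean confidence, answers heard: {heard:.2f}")    # about 0.67
```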
From my personal wiki. Seems appropriate for LessWrong.
The End-product Substitution is a hypothesis proposed by me about my behavior when choosing projects to work on. The hypothesis is that when I am evaluating how much I would like to work on a project, I substitute judgment of how much I will enjoy the end product for judgment of how much I will enjoy the process of creating it. For example, I recently [Sep 2019] considered creating a series of videos mirroring the content of the LessWrong sequences, and found myself fawning over how nice it would be to have created such a series of videos, and not thinking at all about how I would go about creating them, let alone how much I would enjoy doing that.
I just learned a (rationalist) lesson. I’m taking a course that has some homework that’s hosted on a third party site. There was one assignment at the beginning of the semester, a few weeks ago. Then, about a week ago, I was wondering to myself whether there would be any more assignments any time soon. In fact, I even wondered if I had somehow missed a few assignments, since I’d thought they’d be assigned more frequently.
Well, I checked my course’s website (different from the site where the homework was hosted) and didn’t see any mention of assignments. Then I went to the professor’s website, and saw that they said they didn’t assign any “formal homework”. Finally, I thought back to the in-class discussions, where the third-party homework was never mentioned.
“Ah, good,” I thought. “I guess I haven’t missed any assignments, and none are coming up any time soon either.”
Then, today, the third-party homework was actually mentioned in class, so just now I went to look at the third-party website. I have missed three assignments, and there is another one due on Sunday.
I am not judged by the quality of my reasoning. I am judged by what actually happens, as are we all.
In retrospect (read: “beware that hindsight bias might be responsible for this paragraph”) I kind of feel like I wasn’t putting my all into figuring out if I was missing any assignments, and was instead just nervously trying to convince myself that I wasn’t. Obviously, I would rather have had that unpleasant experience earlier and missed fewer assignments—aka, if I was missing assignments, then I should have wanted to believe that I was missing assignments.
Oops.
Congrats on saying oops!
How likely is it now that you are going to miss any more assignments? Not likely at all!
Yup. And the key thing that I’m reminding myself of is that this can’t be achieved by convincing myself that there aren’t any assignments to miss. It can only be achieved for sure by knowing whether there are assignments or not.
I’ve been thinking of signing up for cryonics recently. The main hurdle is that it seems like it’ll be kind of complicated, since at the moment I’m still on my parents’ insurance, and I don’t really know how all this stuff works. I’ve been worrying that the ugh field surrounding the task might end up being my cause of death, by causing me to look on cryonics less favorably just because I subconsciously want to avoid even thinking about what a hassle it will be.
But then I realized that I can get around the problem by pre-committing to sign up for cryonics no matter what, then just cancelling it if I decide I don’t want it.
It will be MUCH easier to make an unbiased decision if choosing cryonics means doing nothing rather than meaning that I have to go do a bunch of complicated paperwork now. It will be well worth a few months (or even years) of dues.
I just caught myself substituting judgment of representativeness for judgment of probability.
I’m a conlang enthusiast, and specifically I study loglangs, which are a branch of conlangs that are based around predicate logic. My motivation for learning these languages was that I was always bothered by all the strange irregularities in my natural language (like the simple past tense being the same as the past participle, and the word inflammable meaning two opposite things).
Learning languages like these has only drawn my attention to even more natural-language nonsense. Occasionally I explain this to conlang lay-people, and maybe 50% of them are surprised to find that English is irregular. Some of them even deny that it is, and state that it all follows a perfectly normal pattern. This is a perpetual annoyance to me, simply because I spend so much time immersed in this stuff that I’ve forgotten how hard it is to spot from scratch.
Well, a while ago I wanted to start learning Mandarin from a friend of mine who speaks it as their first language. While introducing the language, they said that things like tenses were expressed as separate words (“did eat”) rather than sometimes-irregular modifications of existing words (“ate”). This reminded me of loglangs, so I gave them the spiel that I gave in the two previous paragraphs—natlangs, irregularities, annoyances, etc.
“Huh,” said the friend. They then turned to another native Chinese speaker and asked “Does Chinese have anything like that?”
I said, “I guarantee it does.”
This was months ago. Just now I was reflecting on it, and I realized that I have almost no evidence whatsoever that Chinese isn’t perfectly regular (or close enough that the thrust of my claim would be wrong).
It’s clear to me now that my thought process was something like “Well, just yet another conlang outsider who’s stunned and amazed to find that natural languages have problems.” That brought to mind all the other times when I’d encountered people surprised to find that their mother tongue (almost always English) had irregularities, and the erroneous conclusion precipitated right out.
You may also be integrating something you’ve read and then forgotten you read, and this added weight to your visible-and-suspect thought process in order to make a true statement. It would not surprise me to learn that at least some of your study has included examples of irregularity from MANY natural languages, including Chinese. So “I guarantee it does” may be coming from multiple places in your knowledge.
So, was it actually incorrect, or just illegibly-justified?
Hmm, good question. I guess I wouldn’t be surprised to learn that I’d read about Chinese having irregularities, though the main text I’ve read about this (The Complete Lojban Language) didn’t mention any IIRC.
I wouldn’t be surprised if Chinese had no irregularities in the tense system – it’s a very isolating language. But here’s one irregularity: the negation of 有 is 没有 (“to not have/possess”), but the simple negation of every other verb is 不 + verb. You can negate other verbs with 没, but then it’s implied to be 没有 + verb, which makes the verb into something like a present participle. E.g., 没吃 = “to have not eaten”.
I was 100%, completely, unreservedly fooled by this year’s April Fools’ joke. Hilarious XDD
Also as a side note, I’m curious what’s actually in the paywalled posts. Surely people didn’t write a bunch of really high-quality content just for an April Fools’ day joke?
I would appreciate an option to hide the number of votes that posts have. Maybe not hide entirely, but set them to only display at the bottom of a post, and not at the top nor on the front page. With the way votes are currently displayed, I think I’m getting biased for/against certain posts before I even read them, just based on the number of votes they have.
Yeah, this was originally known as “Anti-Kibitzer” on the old LessWrong. It isn’t something we prioritized, but I think GreaterWrong has an implementation of it. Though it would also be pretty easy to create a Stylish script for it (this hides it on the frontpage, and makes the color white on the post-page, requiring you to select the text to see the score):
https://userstyles.org/styles/175379/lesswrong-anti-kibitzer
Oh, good idea! I don’t have Stylish installed, but I have something similar, and I was able to hide it that way. Thanks!
Can you share it?
Sure. The Firefox plugin is Custom Style Sheet and the code is as follows:
Thanks!
Presumably you’d prefer them not to appear in post-list-items as well? (i.e. on the frontpage?)
Right:
:)
ah, whoops.
The other day, my roommate mentioned that the bias towards wanting good things for people in your in-group and bad things for those in your out-group can be addressed by including ever more people in your in-group.
Here’s a way to do that: take a person you want to move into your in-group, and try to imagine them as the protagonist of a story. What are their desires? What obstacles are they facing right now? How are they trying to overcome them?
I sometimes feel annoyed at a person just by looking at them. I invented this technique just now, but I used it one time on a person pictured in an advertisement, and it worked. I had previously been having a “what’s your problem?” feeling, and it was instantly replaced with a loving “I’m rooting for you” feeling.
Why is it my responsibility to heal the wounds that somebody else dealt to me??
Because if you don’t heal your wounds, you will bleed on people who didn’t cut you.
Alternatively: Because you’re the one that hurts if you don’t.
(I see this has been posted elsewhere. I don’t know if I invented it independently or if I read it somewhere and then forgot about it until now.)
Idea: “Ugh-field trades”, where people trade away their obligations that they’ve developed ugh-fields for in exchange for other people’s obligations. Both people get fresh non-ugh-fielded tasks. Works only in cases where the task can be done by somebody else, which won’t be every time but might be often enough for this to work.
Interesting thought. Unfortunately, most tasks where I’m blocked/delayed by an ugh field either dissolve it as soon as I identify it, or include as part of the ugh that only I can do it.
I just caught myself committing a bucket error.
I’m currently working on a text document full of equations that use variables with extremely long names. I’m in the process of simplifying it by renaming the variables. For complicated reasons, I have to do this by hand.
Just now, I noticed that there’s a series of variables O1-O16, and another series of variables F17-F25. For technical reasons relating to the work I’m doing, I’m very confident that the name switch is arbitrary and that I can safely rename the F’s to O’s without changing the meaning of the equations.
But I’m doing this by hand. If I’m wrong, I will potentially waste a lot of work by (1) making this change, (2) making a bunch of other changes, (3) realizing I was wrong, (4) undoing all the other changes, (5) undoing this change, and (6) re-doing all the changes that came after it.
And for a moment, this spurred me to become less confident about the arbitrariness of the naming convention!
The correct thought would have been “I’m quite confident about this, but seeing as the stakes are high if I’m wrong and I can always do this later, it’s still not worth it to make the changes now.”
The problem here was that I was conflating “X is very likely true” with “I must do the thing I would do if X was certain”. I knew instinctively that making the changes now was a bad idea, and then I incorrectly reasoned that it was because it was likely to go wrong. It’s actually unlikely to go wrong, it’s just that if it does go wrong, it’s a huge inconvenience.
Whoops.
Epistemic status: really shaky, but I think there’s something here.
I naturally feel a lot of resistance to the way culture/norm differences are characterized in posts like Ask and Guess and Wait vs Interrupt Culture. I naturally want to give them little pet names, like:
Guess culture = “read my fucking mind, you badwrong idiot” culture.
Ask culture = nothing, because this is just how normal, non-insane people act.
I think this feeling is generated by various negative experiences I’ve had with people around me, who, no matter where I am, always seem to share between them one culture or another that I don’t really understand the rules of. This leads to a lot of interactions where I’m being told by everyone around me that I’m being a jerk, even when I can “clearly see” that there is nothing I could have done that would have been correct in their eyes, or that what they wanted me to do was impossible or unreasonable.
But I’m starting to wonder if I need to let go of this. When I feel someone is treating me unfairly, it could just be because (1) they are speaking in Culture 1, and (2) I am listening in Culture 2 and hearing something they don’t mean to transmit. If I were more tuned in to what people meant to say, my perception of people who use other norms might change.
I feel there’s at least one more important pair of cultures, and although I haven’t mentioned it yet, it’s the one I had in mind most while writing this post. Something like:
Culture 1: Everyone speaks for themselves only, unless explicitly stated otherwise. Putting words in someone’s mouth or saying that they are “implying” something they didn’t literally say is completely unacceptable. False accusations are taken seriously and reflect poorly on the accuser.
Culture 2: The things you say reflect not only on you but also on people “associated” with you. If X is what you believe, you might have to say Y instead if saying X could be taken the wrong way. If someone is being a jerk, you don’t have to extend the courtesy of articulating their mistake to them correctly; you can just shun them off in whatever way is easiest.
I don’t really know how real this dichotomy is, and if it is real, I don’t know for sure how I feel about one being “right” and the other being “wrong”. I tried semi-hard to give a neutral take on the distinction, but I don’t think I succeeded. Can people reading this tell which culture I naturally feel opposed to? Do you think I’ve correctly put my finger on another real dichotomy? Which set of norms, if either, do you feel more in tune with?
Is it because they’re expecting you to read their mind, and go along with their “culture”, instead of asking you?
I couldn’t parse this question. Which part are you referring to by “it”, and what do you mean by “instead of asking you”?
it (the negative experiences) - Are *they* (the negative experiences) the result of (people with a “culture” whose rules you don’t understand) expecting you to read *their* mind, and go along with their “culture”, instead of asking you to go along with their culture?
Aha, no, the mind reading part is just one of several cultures I’m mentioning. (Guess Culture, to be exact.) If I default to being an Asker but somebody else is a Guesser, I might have the following interaction with them:
Me: [looking at some cookies they just made] These look delicious! Would it be all right if I ate one?
Them: [obviously uncomfortable] Uhm… uh… I mean, I guess so...
Here, it’s retroactively clear that, in their eyes, I’ve overstepped a boundary just by asking. But I usually can’t tell in advance what things I’m allowed to ask and what things I’m not allowed to ask. There could be some rule that I just haven’t discovered yet, but because I haven’t discovered it yet, it feels to me like each case is arbitrary, and thus it feels like I’m being required to read people’s minds each time. Hence why I’m tempted to call Guess Culture “Read-my-mind Culture”.
(Contrast this to Ask Culture, where the rule is, to me, very simple and easy to discover: every request is acceptable to make, and if the other person doesn’t want you to do what you’re asking to do, they just say “no”.)
It might be hard to take a normative stance, but if culture 1 makes you feel better AND leads to better results AND helps people individuate and makes adults out of them, then maybe it’s just, y’know, better. Not “better” in the naive mistake-theorist assumption that there is such a thing as a moral truth, but “better” in the correct conflict-theorist assumption that it just suits you and me and we will exert our power to make it more widely adopted, for the sake of us and our enlightened ideals.
When somebody is advocating taking an action, I think it can be productive to ask “Is there a good reason to do that?” rather than “Why should we do that?” because the former phrasing explicitly allows for the possibility that there is no good reason, which I think makes it both intellectually easier to realize that and socially easier to say it.
I just noticed that I’ve got two similarity clusters in my mind that keep getting called to my attention by wording dichotomies like high-priority and low-priority, but that would themselves be better labeled as big and small. This was causing me to interpret phrases like “doing a string of low-priority tasks” as having a positive affect (!) because what it called to mind was my own activity of doing a string of small, on-average medium-priority tasks.
My thought process might improve overall if I toss out the “big” and “small” similarity clusters and replace them with clusters that really are centered around “high-priority” and “low-priority”.