You find some pretty ironic things when rereading 17-year-old blog posts, but this one takes the cake.
If you look over all possible worlds, then asking “did the coin come up Heads or Tails” as if there’s only one answer is incoherent. If you look over all possible worlds, there’s a ~100% chance the coin comes up as Heads in at least one world, and a ~100% chance the coin comes up as Tails in at least one world.
But from the perspective of a particular observer, the question they’re trying to answer is a question of indexical uncertainty—out of all the observers in their situation, how many of them are in Heads-worlds, and how many of them are in Tails-worlds? It’s true that there are just as many Heads-worlds as Tails-worlds—but 2⁄3 of observers are in the latter worlds.
Or to put it another way—suppose you put 10 people in one house, and 20 people in another house. A given person should estimate a 1⁄3 chance that they’re in the first house—and the fact that 1 house is half of 2 houses is completely irrelevant. Why should this reasoning be any different just because we’re talking about possible universes rather than houses?
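A minimal simulation of the house example, if it helps (my own sketch, nothing from the original thread):

```python
import random

# Ten people in house 1, twenty in house 2; sample a random person
# and ask which house they ended up in.
houses = ["house 1"] * 10 + ["house 2"] * 20
trials = 100_000
hits = sum(random.choice(houses) == "house 1" for _ in range(trials))
print(hits / trials)  # ~0.333: a 1/3 chance; "1 house out of 2" never enters into it
```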
I think you’re overestimating the intended scope of this post. Eliezer’s argument involves multiple claims—A, we’ll create ASI; B, it won’t terminally value us; C, it will kill us. As such, people have many different arguments against it. This post is about addressing a specific “B doesn’t actually imply C” counterargument, so it’s not even discussing “B isn’t true in the first place” counterarguments.
While you’re quite right about numbers on the scale of billions or trillions, I don’t think it makes sense in the limit for the prior probability of X people existing in the world to fall faster than X grows in size.
Certain series of large numbers grow larger much faster than they grow in complexity. A program that returns 10^(10^(10^10)) takes fewer bits to specify (relative to most reasonable systems of specifying programs) than a program that returns 32758932523657923658936180532035892630581608956901628906849561908236520958326051861018956109328631298061259863298326379326013327851098368965026592086190862390125670192358031278018273063587236832763053870032004364702101004310417647840155719238569120561329853619283561298215693286953190539832693826325980569123856910536312892639082369382562039635910965389032698312569023865938615338298392306583192365981036198536932862390326919328369856390218365991836501590931685390659103658916392090356835906398269120625190856983206532903618936398561980569325698312650389253839527983752938579283589237325987329382571092301928* - even though 10^(10^(10^10)) is by far the larger number. And it only takes a linear increase in complexity to make it 10^(10^(10^(10^(10^(10^10))))) instead.
*I produced this number via keyboard-mashing; it’s not anything special.
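To make the bits-to-specify point concrete, here's a rough sketch that uses Python source length as a stand-in for program complexity (a crude proxy, but the comparison survives under any reasonable encoding):

```python
# Evaluating these numbers is infeasible (10^(10^(10^10)) has about
# 10^(10^10) digits), so compare the descriptions instead.
nested = "10**10**10**10"   # a program returning 10^(10^(10^10))
print(len(nested))          # 14 characters
# The keyboard-mashed number above, by contrast, has no known description
# much shorter than its several hundred literal digits. And each extra
# "**10" costs only 4 more characters: linear growth in description
# length, tower-exponential growth in the value returned.
```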
Consider the proposition “A superpowered entity capable of creating unlimited numbers of people ran a program that output the result of a random program out of all possible programs (with their outputs rendered as integers), weighted by the complexity of those programs, and then created that many people.”
If this happened, the probability that their program outputs at least X would fall much slower than X rises, in the limit. The sum doesn’t converge at all; the expected number of people created would be literally infinite.
So as long as you assign greater than literally zero probability to that proposition—and there’s no such thing as zero probability—there must exist some number X such that you assign greater than 1/X probability to X people existing. In fact, there must exist some number X such that you assign greater than 1/X probability to X million people existing, or X billion, or so on.
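Here's a toy sketch of the divergence, again using source length as a crude proxy for complexity and assuming a 2^-length prior (my own illustration):

```python
import math

# The n-th program "10**10**...**10" has description length 2 + 4n, hence
# prior weight ~2^-(2+4n), but returns a tower of exponentials. Track
# log10 of each term (weight * value) of the expected-output sum.
for n in range(1, 3):
    src = "10" + "**10" * n
    log10_weight = -len(src) * math.log10(2)
    log10_value = 10.0
    for _ in range(n - 1):
        log10_value = 10.0 ** log10_value
    print(n, log10_value + log10_weight)
# Prints ~8.2, then ~1e10; n = 3 already overflows a float even in log
# space. The terms grow without bound instead of shrinking, so the sum
# diverges and the expected number of people created is infinite.
```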
(btw, I don’t think that the sort of SIA-based reasoning here is actually valid—but if it was, then yeah, it implies that there are infinite people.)
I’m kind of concerned about the ethics of someone signing a contract and then breaking it to anonymously report what’s going on (if that’s what your private source did). I think there’s value in people being able to trust each other’s promises about keeping secrets, and as much as I’m opposed to Anthropic’s activities, I’d nevertheless like to preserve a norm of not breaking promises.
Can you confirm or deny whether your private information comes from someone who was under a contract not to give you that private information? (I completely understand if the answer is no.)
By conservation of expected evidence, I take your failure to cite anything relevant as further confirmation of my views.
This is one of the best burns I’ve ever heard.
Had a dream last night in which I was having a conversation on LessWrong—unfortunately, I can’t remember most of the details of my dreams unless I deliberately concentrate on what happened as soon as I wake up, so I don’t know what the conversation was about.
But I do remember that I realized halfway through the conversation that I had been clicking on the wrong buttons—clicking “upvote” & “downvote” instead of “agree” and “disagree”, and vice versa. In my dream, the first and second pairs of buttons looked identical—both of them were just the < and > signs.
I suggested to the LW team that they put something to clarify which buttons were which—maybe write the words “upvote”, “downvote”, “agree”, and “disagree” above the buttons. They thought that putting the words there would look really ugly and clutter up the UI too much.
But when I woke up, it turned out that the actual site has a checkmark and an X for the second pair of buttons! And it also displays what each one means when you hover over it! So thanks for retroactively solving my problem, LW team!
Zane’s Shortform
Multiple points, really. I believe that this calculation is flawed in specific ways, but I also think that most calculations that attempt to estimate the relative odds of two events that were both very unlikely a priori will end up being off by a large amount. These two points are not entirely unrelated.
The specific problems that I noticed were:
The probabilities are not independent of each other, so they cannot be multiplied together directly. A bear flipping over your tent would almost always immediately be preceded by the bear scratching your tent, so updating on both events would just be double-counting evidence.
The probabilities do not appear to be conditional probabilities. P(A&B&C&D) doesn’t equal P(A)*P(B)*P(C)*P(D); it equals P(A)*P(B|A)*P(C|A&B)*P(D|A&B&C). (There’s a toy numeric sketch of this point and the previous one after this list.)
The “nonbear” hypothesis is lumping together several different hypotheses. P(A|notbear) & P(B|notbear) cannot be multiplied together to get P(A&B|notbear), because (among other reasons) there may be some types of notbears that are very likely to do A but very unlikely to do B, some that are very likely to do both, and so on. Once you’ve observed A, it should update you on what kind of notbear it could be, and thus change the probability it does B.
The “20% a bear would scratch my tent : 50% a notbear would” claim is incorrect for the reasons I mentioned above. If your tent would be scratched 50% of the time in the absence of a bear, and a bear would scratch it 20% of the time, then the chance it gets scratched if there is a bear is 1-(1-50%)(1-20%), or 60%. (Unless you’re postulating that bears always scare off anything else that might scratch the tent—which it seems Luke is indeed claiming.)
I disagree with several of the specific claims about the probabilities, such as “95% chance a bear would look exactly like a fucking bear inside my tent” and “1% chance a notbear would.”
And then the meta-problem: when you’re multiplying together more than two or three probabilities that you estimated, particularly small ones, errors in your ability to estimate them start to add up. Which is why I don’t think it’s usually worthwhile to try and estimate probabilities like this.
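To put those first two points in numbers, here's a toy sketch (the figures are mine, purely illustrative, not Luke's):

```python
# Toy numbers: suppose a flipped tent has always been scratched first.
# Then once you condition on "flipped", "scratched" adds no information.
p_scratch_given_bear = 0.20    # P(scratched | bear)
p_flip_given_scratch = 0.25    # P(flipped | bear & scratched)

# Correct chain rule: P(scratched & flipped | bear)
correct = p_scratch_given_bear * p_flip_given_scratch                          # 0.05

# Naive double-count: treat the observations as independent and multiply
# P(scratched | bear) by P(flipped | bear), which here equals 0.05.
naive = p_scratch_given_bear * (p_scratch_given_bear * p_flip_given_scratch)  # 0.01

print(correct, naive)  # the naive product penalizes the bear hypothesis 5x
```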
But you have a fair point about it being a good idea to practice explicit calculations, even if they’re too complicated to reliably get right in real life. So here’s how I might calculate it:
P(bear encounters you): 1%.
P(tent scratched | bear): 60%, for the reasons I said above… unless we take into account it scaring away other tent-scratching animals, in which case maybe 40%.
P(tent flipped over | bear & tent scratched): 20%, maybe? I think if the bear has already taken an interest in your tent, it’s more likely than usual to flip it over.
P(you see a bear-shaped object | bear & tent scratched & tent flipped over): Bears always look like bears. This is so close to 100% I wouldn’t even normally include it in the calculation, but let’s call it 99.99%.
P(you get eaten | bear & tent scratched & tent flipped over & you see a bear-shaped object): It’s already been pretty aggressive so far, so I’d say perhaps 5%.
On the other side, almost every object has only an infinitesimal probability of looking exactly like a bear; for simplicity, let’s only consider Bigfoot and serial-killer-who’s-a-furry, then add them up.
P(Bigfoot exists): …hmm. I am not an expert on the matter, but let’s say 1%.
P(Bigfoot encounters you | Bigfoot exists): There can’t be that many Bigfoots (Bigfeet?) out there, or else people would have caught one. 0.01%.
P(tent scratched | Bigfoot): Bigfeet are probably more aggressive than bears, so 70%.
P(tent flipped over | Bigfoot & tent scratched): Again, Bigfeet are supposed to be pretty aggressive, so 50%.
P(you see a bear-shaped object | Bigfoot & tent scratched & tent flipped over): Bigfoot looks similar enough to a bear that you’ll almost certainly think he’s a bear. 99%.
P(you get eaten | Bigfoot & tent scratched & tent flipped over & you see a bear-shaped object): Again, Bigfeet aggressive, 30%.
Then for the furry cannibal one:
P(furry cannibal stalking this forest): 0.000001% (that’s one in a hundred million, if I got my zeroes right). I welcome you to prove me wrong on the matter by manually increasing the number of furry cannibals in a given forest.
P(furry cannibal encounters you | furry cannibal exists): How large of a forest is this? Well, he probably has his methods of locating prey, so let’s say 10%. Wait, why did I assume he’s a “he”? What gender is the typical furry cannibal? Probably a trans woman? Let’s name this furry cannibal Susan.
P(tent scratched | Susan): Probably not that high; she doesn’t want to wake you up too soon. 30%.
P(tent flipped over | Susan & tent scratched): She might just sneak in, but let’s say 90%.
P(you see a bear-shaped object | Susan & tent scratched & tent flipped over): She’s wearing a bear costume, as hypothesized; 99.99%.
P(you get eaten | Susan & tent scratched & tent flipped over & you see a bear-shaped object): Yes, of course this happens; this was her whole kink in the first place! 99%.
So for “bear,” we have 1%*40%*20%*99.99%*5% = 0.004%. For “Bigfoot,” we have 1%*0.01%*70%*50%*99%*30% = 0.00001%. For “Susan,” we have 0.000001%*10%*30%*90%*99.99%*99% = 0.000000027%. Looks like Bigfoot was so much more likely than Susan that we can pretty much just forget the Susan possibility altogether. It’s 0.004 to 0.00001, so roughly 400 to 1 odds that it’s a bear eating you rather than Bigfoot.
(Although I actually think you should be even more confident than 400 to 1 that it’s a bear rather than Bigfoot, and that I just was off by an order of magnitude for one reason or another, as happens when you’re doing these sorts of calculations. And if you ever actually observe all of these things, the most likely hypothesis is that you’re dreaming.)
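For anyone who wants to check the arithmetic, here's the whole thing in Python (same numbers as above, nothing new):

```python
# Each line multiplies out one hypothesis chain from the comment above.
bear    = 0.01 * 0.40 * 0.20 * 0.9999 * 0.05
bigfoot = 0.01 * 0.0001 * 0.70 * 0.50 * 0.99 * 0.30
susan   = 1e-8 * 0.10 * 0.30 * 0.90 * 0.9999 * 0.99

print(f"bear:    {bear:.4%}")     # ~0.0040%
print(f"bigfoot: {bigfoot:.6%}")  # ~0.000010%
print(f"susan:   {susan:.9%}")    # ~0.000000027%
print(f"bear : bigfoot = {bear / bigfoot:.0f} : 1")  # ~385, i.e. roughly 400 to 1
```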
You can just try to estimate the base rate of a bear attacking your tent and eating you, then estimate the base rate of a thing that looks identical to a bear attacking your tent and eating you, and compare them. Maybe one in a thousand tents get attacked by a bear, and 1% of those tent attacks end with the bear eating the person inside. The second probability is a lot harder to estimate, since it mostly involves off-model surprises like “Bigfoot is real” and “there is a serial killer in these woods wearing a bear suit,” but I’d have trouble seeing how it could be above one in a billion. (Unless we’re including possibilities like “this whole thing is just a dream”—which actually should be your main hypothesis.)
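In code, that back-of-the-envelope comparison (numbers straight from the paragraph above) looks like:

```python
p_bear_eats    = (1 / 1000) * 0.01   # tent attacked by a bear, then eaten: 1e-5
p_nonbear_eats = 1e-9                # generous ceiling for bear-lookalikes
print(p_bear_eats / p_nonbear_eats)  # 10000.0, i.e. 10,000 : 1 odds it's a bear
```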
In general, when you’re dealing with very low or very high probabilities, I’d recommend you just try to use your intuition instead of trying to calculate everything out explicitly.* The main reason is this: if you estimate a probability as being 30% instead of 50%, it won’t usually affect the result of the calculation that much. On the other hand, if you estimate a probability as being 1/10^5 instead of 1/10^6, it can have an enormous impact on the end result. However, humans are a lot better at intuitively telling apart 30% from 50% than they are at telling apart 1/10^5 from 1/10^6.
If you try to do explicit calculations about probabilities that are pretty close to 1:1, you’ll probably get a pretty accurate result; if you try to do explicit calculations about probabilities that are several orders of magnitude away from each other, you’ll probably be off by at least one order of magnitude. In this case, you calculated that even if a person on a camping trip is being eaten by something that looks identical to a bear, there’s still about a 2.6% chance that it’s not a bear. When you get a result that ridiculous, it doesn’t mean there’s a nonbear eating you, it means you’re doing the math wrong.
*The situations in which you can get useful information from an explicit calculation on low probabilities are situations where you’re fine with being off by substantial multiplicative factors. Like, if you’re making a business decision where you’re only willing to accept a <5% chance of something happening, and you calculate that there’s only a one in a trillion chance, then it doesn’t actually matter whether you were off by a factor of a million to one. (Of course, you still do need to check that there’s no way you could be off by an even larger factor than that.)
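A toy model of how those estimation errors compound (my own sketch; the "each factor off by up to 2x" assumption is mine):

```python
import random

# Multiply five estimated probabilities, each off from its true value by
# a random factor between 0.5x and 2x, and watch the total drift.
random.seed(0)
drifts = sorted(
    2 ** sum(random.uniform(-1, 1) for _ in range(5))  # product of 5 error factors, in log2
    for _ in range(10_000)
)
print(drifts[500], drifts[9500])  # central 90% of runs: the product drifts by up to ~4x either way
```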
It doesn’t matter how often the possum would have scratched it. If your tent would be scratched 50% of the time in the absence of a bear, and a bear would scratch it 20% of the time, then the chance it gets scratched if there is a bear is 1-(1-50%)(1-20%), or 60%. Unless you’re postulating that bears always scare off anything else that might scratch the tent.
Also, what about how some of these probabilities are entangled with each other? Your tent being flipped over will almost always involve your tent being scratched, so once we condition on the tent being flipped over, that screens off the evidence from the tent being scratched.
Also, only 95% chance a bear would look like a bear? And only 0.01% chance it would eat you?
Realistically, once we’ve seen a bear-shaped object scratch your tent, flip it over, and start eating you, you should be way more confident than 38 to 1 that you’re being eaten.
“20% a bear would scratch my tent : 50% a notbear would”
I think the chance that your tent gets scratched should be strictly higher if there’s a bear around?
Do you have any specific examples of what this new/rebooted organization would be doing?
It sounds odd to hear the “even if the stars should die in heaven” song with a different melody than I had imagined when reading it myself.
I would have liked to hear the Tracey Davis “from darkness to darkness” song, but I think that was canonically just a chant without a melody. (Although I imagined a melody for that as well.)
...why did someone promote this to a Frontpage post.
[SP] The Edge of Morality
If I’m understanding correctly, the argument here is:
A) [formula missing]
B) [formula missing]
C) [formula missing]
Therefore, [formula missing].
First off, this seems to have an implicit assumption that [formula missing].
I think this assumption is true for any functions f and g, but I’ve learned not to always trust my intuitions when it comes to limits and infinity; can anyone else confirm this is true?
Second, A seems to depend on the relative sizes of the infinities, so to speak. If j and k are large but finite numbers, then [formula missing] if and only if j is substantially greater than k; if k is close to or larger than j, it becomes much less than or greater than −1/12.
I’m not sure exactly how this works when it comes to infinities—does the infinity on the sum have to be larger than the infinity on the limit for this to hold? I’m pretty sure what I just said was nonsense; is there a non-nonsensical version?
In conclusion, I don’t know how infinities work and hope someone else does.
I think I could be a good fit as a writer, but I don’t have much in the way of writing experience I can show you. Do you have any examples of what someone in this position would be focusing on? I’m happy to write up a couple of pieces to demonstrate my abilities.
The question, then, is whether a given person is just an outlier by coincidence, or whether the underlying causal mechanisms that created their personality actually are coming from some internal gender-variable being flipped. (The theory being, perhaps, that early-onset gender dysphoria is an intersex condition, to quote the immortal words of a certain tribute band.)
If it was just that biological females sometimes happened to have a couple traits that were masculine—and these traits seemed to be at random, and uncorrelated—then that wouldn’t imply anything beyond “well, every distribution has a couple outliers.” But when you see that lesbians—women who have the typically masculine trait of attraction to women—are also unusually likely to have other typically masculine traits—then that implies that there’s something else going on. Such as, some of them really do have “male brains” in some sense.
And there are so many different personality traits that are correlated with gender (at least 18, according to the test mentioned above, and probably many more that can’t be tested as easily) that it’s very unlikely someone would have an opposite-sex personality just by chance alone. That’s why I’d guess that a lot of the feminine “men” and masculine “women” really do have some sort of intersex condition where their gender-variable is flipped. (Although there are some cultural confounders too, like people unconsciously conforming to stereotypes about how gay people act.)
I completely agree that dividing everyone between “male” and “female” isn’t enough to capture all the nuance associated with gender, and would much prefer that we used more words than that. But if, as the world often seems to expect, we have to approximate all of someone’s character traits with only a single binary label… then there are a lot of people for whom it’s more accurate to use the one that doesn’t match their sex.
He said it was him on Joe Rogan’s podcast.