What about your past self? If Night Guy can predict what Morning Guy will do, Morning Guy is effectively threatening his past self.
But… but… Light actually won, didn’t he? At least in the short run—he managed to defeat L. I was always under the impression that some of these “mistakes” were committed by Light deliberately in order to lure L.
Is there an analogous experiment for Tegmark’s multiverse?
You set up an experiment so that you survive only if some outcome, anticipated by your highly improbable theory of physics, is true.
Then you wake up in a world that, with high probability, is governed by your theory.
If I understand correctly, under MW you anticipate the experience of surviving with probability 1, and under C with probability 0.5. I don’t think that’s justified.
In both cases the probability should be either conditional on “being there to experience anything” (and equal to 1), OR unconditional (equal to the “external” probability of survival, 0.5). Your anticipation is something in between: you take the external probability in C, but condition on the surviving branches in MW.
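To spell out the two consistent conventions with the numbers above (a toy sketch of my own; the variable names are not from the original discussion):
p_external = 0.5          # unconditional ("external") probability of surviving the experiment
p_given_experience = 1.0  # probability conditional on being there to experience anything
# Under either convention the number is the same for MW and for C; the criticized
# anticipation uses p_external for C but p_given_experience for MW.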
To go with the TV series analogy proposed by Eliezer, maybe it could be an end of Season 1?
It adds a “friend” CSS class to your friend’s username everywhere, so you can add a user style or some other hack to highlight it. There is probably a reason LessWrong doesn’t do it by default, though.
I have no familiarity with the Reddit/LessWrong codebase, but isn’t this (r2/r2/models/subreddit.py) the only relevant place?
elif self == Subreddit._by_name(g.default_sr) and user.safe_karma >= g.karma_to_post:
So it’s a matter of changing that
g.karma_to_post
(which apparently is a global configuration variable) into a per-subreddit option (like the ones defined at the top of the file). (And, of course, applying that change to the database, which I have no idea about, but this also shouldn’t be hard...)
ETA: Or, if I understand the code correctly, one could just change
elif self.type == 'public':
(a few lines above) to
elif self.type == 'public' and user.safe_karma >= 1:
but it’s a dirty hack.
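For concreteness, here is a rough sketch of the per-subreddit option suggested above (untested, and not the actual r2 code; apart from the quoted type and safe_karma checks, every name here is an assumption on my part):
# Simplified, hypothetical version of the permission check. The real method has
# more branches (admins, moderators, contributors), and a per-subreddit
# karma_to_post attribute is the proposed change, not existing code.
def can_submit(subreddit, user):
    if subreddit.type != 'public':
        return False
    # per-subreddit threshold instead of the global g.karma_to_post
    threshold = getattr(subreddit, 'karma_to_post', 0)
    return user.safe_karma >= threshold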
Oh, right. Somehow I was expecting it to be 40 and 0.4. Now it makes sense.
Something is wrong with the numbers here:
The probability that a randomly chosen man survived given that they were given treatment A is 40⁄100 = 0.2
There are some theories about continuation of subjective experience “after” objective death—quantum immortality, or an extension of quantum immortality to Tegmark’s multiverse (see this essay by Moravec). I’m not sure taking them seriously is a good idea, though.
I imagine the “stress table” is just a threshold value, and the dice-roll result is unknown. This way, stress is weak evidence for lying.
6502 simulated—mind uploading for microprocessors
I considered the existence of Santa definitive proof that the paranormal/magic exists and that not everything in the world is in the domain of science (and was slightly puzzled that adults didn’t see it that way).
No conspiracies, but for a long time I’ve been very prone to wishful thinking. I’m not really sure whether believing in Santa actually influenced that. I don’t remember finding out the truth as a big revelation, though—no influence on my worldview or on my trust in my parents.
(I was raised without religion.)
I could also imagine that there are no practically feasible approaches to AGI.
Is there a link to an online explanation of this? When are the consequences of breaking an oath worse than a destroyed world? What did “world” mean when he said it? Humans? Earth? Humans on Earth? Energy in the Multiverse?
Suppose someone comes to a rationalist Confessor and says: “You know, tomorrow I’m planning to wipe out the human species using this neat biotech concoction I cooked up in my lab.” What then? Should you break the seal of the confessional to save humanity?
It appears obvious to me that the issues here are just those of the one-shot Prisoner’s Dilemma, and I do not consider it obvious that you should defect on the one-shot PD if the other player cooperates in advance on the expectation that you will cooperate as well.
So you’re saying that the knowledge “I survive X with probability 1” can in no way be translated into an objective rule without losing some information?
I assume the rules speak about subjective experience, not about “some Everett branch existing” (so if I flip a coin, P(I observe heads) = 0.5, not 1). (What do the probabilities of possible, mutually exclusive outcomes of a given action sum to in your system?)
Isn’t the translation a matter of applying conditional probability? I.e., P(survives(me, X)) = 1 ⇔ P(survives(joe, X) | joe’s experience continues) = 1
Sorry, now I have no idea what we’re talking about. If your experiment involves killing yourself after seeing the wrong string, this is close to the standard quantum suicide.
If not, I would have to see the probabilities to understand. My analysis is like this: P(I observe string S | MWI) = P(I observe string S | Copenhagen) = 2^-30, regardless of whether the string S is specified beforehand or not. MWI doesn’t mean that my next Everett branch must be S because I say so.
Either you condition the observation (of surviving 1000 attempts) on the observer existing, and you have 1 in both cases, or you don’t condition it on the observer and you have p^1000 in both cases. You can’t have it both ways.
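In toy numbers (my own illustration, with a made-up per-attempt survival chance):
p = 0.5                   # survival probability for a single attempt
n = 1000                  # number of attempts
p_unconditional = p ** n  # not conditioning on the observer: p^1000 under MWI and Copenhagen alike
p_conditional = 1.0       # conditioning on the observer existing: 1 under both
Mixing the two (p^1000 for Copenhagen but 1 for MWI) is the move being objected to.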
How do you even make a quantum coin with 1/googolplex chance?