I’ve just made an enrollment deposit at the University of Illinois at Urbana-Champaign, and I’m wondering if any other rationalists are going, and if so, would they be interested in sharing a dorm?
Your link is messed up.
Perhaps instead of immediately giving up and concluding that it’s impossible to reason correctly with MWI, it would be better to take the Born rule at face value as a predictor of subjective probability.
I would immediately download this iff it had a GUI.
The AI is a program. Running on a processor. With an instruction set. Reading the instructions from memory. These instructions are its programming. There is no room for acausal magic here. When the goals get modified, the modification is done by a computer running code.
Consider indicating that your post contains spoilers.
Got it. I was previously having difficulty making that belief pay rent.
I’ve also heard that for soldiers, seeing one more death or injury can be the tipping point into PTSD.
Am I missing something, or does this follow trivially from PTSD being binary and the set of possible body counts being the natural numbers?
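Spelling out the triviality (a sketch; the idealizing assumptions that PTSD onset is a clean yes/no event and that exposure comes in whole incidents are mine):

```latex
Let $f : \mathbb{N} \to \{0, 1\}$ be PTSD status after witnessing $n$ deaths or injuries,
with $f(0) = 0$ and $f(N) = 1$ for some $N$.
By the well-ordering of $\mathbb{N}$ there is a least $n^{*}$ with $f(n^{*}) = 1$, so
\[
  f(n^{*} - 1) = 0 \quad\text{and}\quad f(n^{*}) = 1,
\]
i.e.\ some single additional death or injury is always the tipping point.
```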
I’m a new user with −1 karma who therefore can’t vote, so I’ll combat censorship bias like this:
Moderate programmer, correct
Yes
> but due to hedonistic adaptation, you will come out no less unhappy.
Did you mean “no more unhappy”?
Edit: Formatting of quote.
Yes. Woops.
OK, this is a definition discrepancy. The “or” that I’m using is (A or B) <-> not((not A) and (not B)).
Edit: I was wrong for a different reason.
If p + q = 1, then P(A or B) = 1. The equivalence statement about A and B that we’re updating on can be stated as (A or B) iff (A and B). Since probability mass is conserved it has to go somewhere, and everything other than A and B has probability 0, so it has to go to the only remaining proposition, whose probability is g(p, q), resulting in g(p, q) = 1. Stating this as p + q was an attempt to find something from which to generalize further.
My first reaction to the second question is to consider the case in which p + q = 1. Then, the answer is clearly that g(p, q) = p + q. I suspect that this is incomplete, and that further relevant information needs to be specified for the answer to be well-defined.
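For what it’s worth, here is one way to make that “further relevant information” concrete (a sketch; the independence assumption below is mine, not part of the original question):

```latex
Treat the update as conditioning on the equivalence $A \leftrightarrow B$:
\[
  g(p, q) \;=\; P(A \mid A \leftrightarrow B)
          \;=\; \frac{P(A \wedge B)}{P(A \wedge B) + P(\neg A \wedge \neg B)}.
\]
The right-hand side depends on the joint distribution, not only on $p = P(A)$ and $q = P(B)$.
If, for instance, $A$ and $B$ start out independent, this becomes
\[
  g(p, q) = \frac{pq}{pq + (1 - p)(1 - q)},
\]
while a different joint with the same marginals gives a different answer,
so $p$ and $q$ alone do not pin $g$ down.
```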
There’s actually no need to settle for finite truncations of a decision agent. The unlosing decision function (on lotteries) can be defined in first-order logic, and your proof that there are finite approximations of a decision function is sufficient to use the compactness theorem to produce a full model.
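Roughly, the compactness move I have in mind looks like this (a sketch only; the exact axiomatization of “unlosing” would be whatever conditions the post states):

```latex
Work in a first-order language with a relation symbol $\preceq$ on constant names for lotteries,
plus whatever is needed to state the unlosing conditions.
Let $T$ contain, for each finite menu of lotteries, an axiom saying that the choices $\preceq$
induces on that menu violate none of those conditions.
Any finite subset of $T$ mentions only finitely many lotteries, so a finite approximation of
the decision function (which exists by your argument) is a model of it.
By compactness, $T$ has a model: a single preference structure that is unlosing on every
finite menu simultaneously.
```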