Just a technical point, but it is not true that most of the probability mass of a hypothesis has to come from “the shortest claw”. You can have lots of longer claws which together have more probability mass than a shorter one. This is relevant to situations like quantum mechanics, where the claw first needs to extract you from an individual universe of the multiverse, and that costs a lot of bits (more than just describing your full sensory data would cost), but from an epistemological point of view there are many possible such universes that you might be a part of.
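To spell out the counting argument (under the usual $2^{-\text{length}}$ prior over claws): a single claw of length $\ell$ carries weight $2^{-\ell}$, while $2^{c+1}$ distinct claws of length $\ell + c$ together carry $2^{c+1} \cdot 2^{-(\ell+c)} = 2 \cdot 2^{-\ell}$, twice as much as the shorter claw.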
As I understood it, the whole point is that the buyer is proposing C as an alternative to A and B. Otherwise, there is no advantage to him downplaying how much he prefers A to B / pretending to prefer B to A.
Hmm, the fact that C and D are even on the table makes it seem less collaborative to me, even if you are only explicitly comparing A and B. But I guess it is kind of subjective.
It seems weird to me to call a buyer and seller’s values aligned just because they both prefer outcome A to outcome B, when the buyer prefers C > A > B > D and the seller prefers D > A > B > C, which are almost exactly misaligned. (Here A = sell at current price, B = don’t sell, C = sell at lower price, D = sell at higher price.)
Isn’t the fact that the buyer wants a lower price proof that the seller and buyer’s values aren’t aligned?
You’re right that “Experiencing is intrinsically valuable to humans”. But why does this mean humans are irrational? It just means that experience is a terminal value. But any set of terminal values is consistent with rationality.
Of course, from a pedagogical point of view it may be hard to explain why the “empty function” is actually a function.
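(For reference, the set-theoretic answer: a function $f : X \to Y$ is a set of pairs $f \subseteq X \times Y$ such that each $x \in X$ appears in exactly one pair. When $X = \emptyset$ this condition is vacuously satisfied by the empty set of pairs, so for every $Y$ there is exactly one function $\emptyset \to Y$.)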
“When you multiply two prime numbers, the product will have at least two distinct prime factors: the two prime numbers being multiplied.”
Technically, it is not true that the prime numbers being multiplied need to be distinct. For example, 2*2=4 is the product of two prime numbers, but it is not the product of two distinct prime numbers.
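A quick check with sympy, if you want to see it concretely:

```python
from sympy import factorint

# 4 = 2*2 is a product of two primes, but it has only one
# DISTINCT prime factor: 2, with multiplicity 2.
print(factorint(4))    # {2: 2}
print(factorint(15))   # {3: 1, 5: 1} -- two distinct prime factors
```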
“As a result, it is impossible to determine the sum of the largest and second largest prime numbers, since neither of these can be definitively identified.”
This seems wrong: “neither can be definitively identified” makes it sound like they exist but just can’t be identified, when in fact they simply don’t exist.
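The underlying fact is Euclid’s theorem: there is no largest prime. Given any finite list of primes $p_1, \dots, p_n$, the number $p_1 p_2 \cdots p_n + 1$ has some prime factor, and that factor cannot be any $p_i$ (dividing by $p_i$ leaves remainder 1), so no finite list contains all the primes.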
“Safe primes are a subset of Sophie Germain primes.”
Not true, e.g. 7 is safe but not Sophie Germain.
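Easy to check with sympy (recall $p$ is a Sophie Germain prime iff $2p+1$ is prime, and a safe prime iff $(p-1)/2$ is prime):

```python
from sympy import isprime

def is_sophie_germain(p):
    # p is a Sophie Germain prime iff 2p + 1 is also prime.
    return isprime(p) and isprime(2 * p + 1)

def is_safe(p):
    # p is a safe prime iff (p - 1)/2 is also prime.
    return isprime(p) and isprime((p - 1) // 2)

print(is_safe(7))            # True:  (7 - 1)/2 = 3 is prime
print(is_sophie_germain(7))  # False: 2*7 + 1 = 15 = 3*5 is not prime
```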
OK, that makes sense.
OK, that’s fair, I should have written down the precise formula rather than an approximation. My point though is that your statement
“the expected value of X happening can be high when it happens a little (because you probably get the good effects and not the bad effects Y)”
is wrong because a low probability of large bad effects can swamp a high probability of small good effects in expected value calculations.
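Toy numbers (made up for illustration): if X yields +1 with probability 0.99 and triggers Y, a loss of 1000, with probability 0.01, then the expected value is 0.99 × 1 + 0.01 × (−1000) = 0.99 − 10 = −9.01, negative despite the 99% chance of the good outcome.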
Yeah, but the expected value would still be .
I don’t see why you say Sequential Proportional Approval Voting gives little incentive for strategic voting. If I am confident a candidate I support is going to be elected in the first round, it’s in my interest not to vote for them so that my votes for other candidates I support will count for more. Of course, if a lot of people think like this then a popular candidate could actually lose, so there is a bit of a brinksmanship dynamic going on here. I don’t think that is a good thing.
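Here is a minimal SPAV sketch with hypothetical ballots to make the incentive concrete (the standard 1/(1+k) reweighting, where k is the number of a ballot’s approved candidates already elected):

```python
def spav_winners(ballots, seats):
    """Sequential Proportional Approval Voting (SPAV).

    ballots: list of (count, set-of-approved-candidates) pairs.
    Each round elects the candidate with the highest total weight,
    where a ballot's weight is 1/(1 + k) and k is the number of its
    approved candidates already elected.
    """
    candidates = set().union(*(b for _, b in ballots))
    elected = []
    for _ in range(seats):
        def score(c):
            return sum(n / (1 + len(b & set(elected)))
                       for n, b in ballots if c in b)
        winner = max(candidates - set(elected), key=score)
        elected.append(winner)
    return elected

# Hypothetical electorate: a 40-voter bloc approving {A, B},
# 35 voters approving {A, C}, 10 approving {C}.
honest = [(40, {"A", "B"}), (35, {"A", "C"}), (10, {"C"})]
print(spav_winners(honest, 2))   # ['A', 'C'] -- the bloc only gets A

# If 20 bloc voters strategically drop the sure winner A, their
# ballots keep full weight in round 2 and carry B past C.
partial = [(20, {"A", "B"}), (20, {"B"}), (35, {"A", "C"}), (10, {"C"})]
print(spav_winners(partial, 2))  # ['A', 'B'] -- the bloc gets both

# But if all 40 drop A, A loses round 1 outright.
all_in = [(40, {"B"}), (35, {"A", "C"}), (10, {"C"})]
print(spav_winners(all_in, 2))   # ['C', 'B'] -- A is not elected
```

The second scenario is the individual incentive; the third is the brinksmanship failure mode where too many voters reason the same way and the popular candidate loses.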
The definition of a derivative seems wrong. For example, suppose that $f(x) = 0$ for rational $x$ but $f(x) = 1$ for irrational $x$. Then $f$ is not differentiable anywhere, but according to your definition it would have a derivative of 0 everywhere (since $dx$ could be an infinitesimal consisting of a sequence of only rational numbers).
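Here is that counterexample in sympy’s exact arithmetic (my reconstruction of the setup: the putative derivative is the standard part of $(f(x+dx)-f(x))/dx$ along a sequence representing $dx$):

```python
import sympy as sp

def f(x):
    # Dirichlet-style function: 0 at rational points, 1 at irrational ones.
    return sp.Integer(0) if x.is_rational else sp.Integer(1)

# A rational sequence tending to 0, representing the infinitesimal dx.
for n in range(1, 5):
    eps = sp.Rational(1, 10**n)
    q_rat = (f(sp.Integer(1) + eps) - f(sp.Integer(1))) / eps  # at x = 1
    q_irr = (f(sp.sqrt(2) + eps) - f(sp.sqrt(2))) / eps        # at x = sqrt(2)
    print(q_rat, q_irr)  # 0 0 every time
```

The quotient vanishes identically along rational increments, while along an irrational increment at a rational point it equals $1/dx$, which is unbounded, so $f$ has no derivative anywhere.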
But if they are linearly independent, then they evolve independently, which means that any one of them, alone, could have been the whole thing—so why would we need to postulate the other worlds? And anyway, aren’t the worlds supposed to be interacting?
Can’t this be answered by an appeal to the fact that the initial state of the universe is supposed to be low-entropy? The wavefunction corresponding to one of the worlds, run back in time to the start of the universe, would have higher entropy than the wavefunction corresponding to all of them together, so it’s not as good a candidate for the starting wavefunction of the universe.
No, the whole premise of the face-reading scenario is that the agent can tell that his face is being read, and that’s why he pays the money. If the agent can’t tell whether his face is being read, then his correct action (under FDT) is to pay the money if and only if (probability of being read) times (utility of returning to civilization) is greater than (utility of the money). Now, if this condition holds but in fact the driver can’t read faces, then FDT does pay the $50, but this is just because it got unlucky, and we shouldn’t hold that against it.
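With made-up numbers: if the probability of being read is 0.01 and returning to civilization is worth $1,000,000 to you, then 0.01 × $1,000,000 = $10,000 > $50, so FDT pays; when the driver turns out to be face-blind, losing the $50 is bad luck, not a bad decision rule.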
In your new dilemma, FDT does not say to pay the $50. It only says to pay when the driver’s decision of whether or not to take you to the city depends on what you are planning to do when you get to the city. Which isn’t true in your setup, since you assume the driver can’t read faces.
“a random letter contains about 7.8 [bits of information]”
This is wrong: a random letter contains log(26)/log(2) ≈ 4.7 bits of information.
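Quick check:

```python
import math

# Entropy of a uniformly random letter from a 26-letter alphabet.
print(math.log2(26))  # 4.700439718141092
```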
This only works if Omega is willing to simulate the Yankees game for you.
I have tinnitus every time I think about the question of whether I have tinnitus. So do I have tinnitus all the time, or only the times when I notice?
This isn’t true. In constructivist logic, if you are trying to disprove a statement of the form “for all x, P(x)”, you do not actually have to find an x such that P(x) is false—it is enough to assume that P(x) holds for various values of x and then derive a contradiction. By contrast, if you are trying to prove a statement of the form “there exists x such that P(x) holds”, then you do actually need to construct an example of x such that P(x) holds (in constructivist logic at least).
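A minimal Lean 4 sketch of the asymmetry (toy propositions of my own choosing, not from the original discussion): refuting a universal claim only requires deriving a contradiction from assuming it, while proving an existential claim requires an explicit witness.

```lean
-- Refuting "∀ n, n ≤ 5" constructively: assume it and derive False.
-- We never prove ¬P(x) up front; we just use the assumption h.
theorem not_all_le_five : ¬ (∀ n : Nat, n ≤ 5) :=
  fun h => absurd (h 6) (by decide)

-- Proving "∃ n, 5 < n" constructively requires exhibiting a witness.
theorem exists_gt_five : ∃ n : Nat, 5 < n :=
  ⟨6, by decide⟩
```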