How do we assign zero probability to 0=1 when we can’t prove our logic consistent?
Watercressed
Concern trolling in the false flag political operation sense is a thing that happened
An example of this occurred in 2006 when Tad Furtado, a staffer for then-Congressman Charles Bass (R-NH), was caught posing as a “concerned” supporter of Bass’ opponent, Democrat Paul Hodes, on several liberal New Hampshire blogs, using the pseudonyms “IndieNH” or “IndyNH”. “IndyNH” expressed concern that Democrats might just be wasting their time or money on Hodes, because Bass was unbeatable.[37][38] Hodes eventually won the election.
Does fundamentalist Christianity indicate that the believer would be irrational about issues other than religion?
If yes, what’s the difference?
You could spend the tax-evaded income on the black market, since you’re hiding contraband from the police anyway.
Since when did epistemic rationality demand making the truth common knowledge? It just means you should know what’s true yourself.
Different information about part of nature is not sufficient to change an inference—the probabilities could be independent of the researcher’s intentions.
It depends on your priors
One of his “desiderata”, his principles of construction, is that the robot gives equal plausibility assignments to logically equivalent statements
I don’t see this desideratum. The consistency requirement is that if there are multiple ways of calculating something, then all of them yield the same result. A few minutes of thought didn’t lead to any way of leveraging a probability other than zero or one for Prime(53) into two different results.
If I try to do anything with P(Prime(53)|PA), I get stuff like P(PA|Prime(53)), and I don’t have any idea how to interpret that. Since PA is a set of axioms, it doesn’t really have a truth value that we can do probability with. Technically speaking, Prime(53) means that the PA axioms imply that 53 has exactly two factors. Since the axioms are in the predicate, any mechanism that forces P(Prime(53)) to be one must do so for all priors.
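To make the inversion explicit, here is the Bayes’-theorem form being gestured at, written as a sketch in the notation above (it shows where a term like P(PA|Prime(53)) comes from, not how to interpret it):

$$P(\mathrm{Prime}(53) \mid \mathrm{PA}) = \frac{P(\mathrm{PA} \mid \mathrm{Prime}(53))\, P(\mathrm{Prime}(53))}{P(\mathrm{PA})}$$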
One final thing: Isn’t it wrong to assign a probability of zero to Prime(4), i.e., PA implies that 4 has exactly two factors, since PA could be inconsistent and imply everything?
You can skip this paragraph and the next if you’re familiar with the problem. But if you’re not, here’s an illustration. Suppose your friend has some pennies that she would like to arrange into a rectangle, which of course is impossible if the number of pennies is prime. Let’s call the number of pennies N. Your friend would like to use probability theory to guess whether it’s worth trying; if there’s a 50% chance that Prime(N), she won’t bother trying to make the rectangle. You might imagine that if she counts them and finds that there’s an odd number, this is evidence of Prime(N); if she furthermore notices that the digits don’t sum to a multiple of three, this is further evidence of Prime(N). In general, each test of compositeness that she knows should, if it fails, raise the probability of Prime(N).
But what happens instead is this. Suppose you both count them, and find that N=53. Being a LessWrong reader, you of course recognize from recently posted articles that N=53 implies Prime(N), though she does not. But this means that P(N=53) ≤ P(Prime(N)). If you’re quite sure of N=53—that is, P(N=53) is near 1—then P(Prime(N)) is also near 1. There’s no way for her to get a gradient of uncertainty from simple tests of compositeness. The probability is just some number near 1.
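Spelled out, the constraint is just monotonicity of probability under logical implication (a sketch, using the events above): since N=53 implies Prime(N), the event N=53 is contained in the event Prime(N), so

$$P(N = 53) \le P(\mathrm{Prime}(N))$$

and a P(N=53) near 1 forces P(Prime(N)) to be at least as high.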
I don’t understand why this is a problem. You and your friend have different states of knowledge, so you assign different probabilities.
Survey Taken
I keep seeing probability referred to as an estimate of how certain you are of a belief. And while I guess it could be argued that your certainty in a belief should track the number of possible worlds remaining, that doesn’t necessarily follow. Does the above explanation differ from how other people use probability?
One can ground probability in Cox’s Theorem, which uniquely derives probability from a few things we would like our reasoning system to do.
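For reference, the rules Cox’s Theorem pins down (after rescaling the plausibility function) are the familiar product and sum rules; this is the standard statement of the result, not anything specific to this thread:

$$P(A \wedge B \mid C) = P(A \mid B \wedge C)\, P(B \mid C), \qquad P(A \mid C) + P(\lnot A \mid C) = 1$$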
Why should anyone expect a specific kind of word input to be capable of persuading everyone? They’re just words, not magic spells.
The specific word sequence is evidence for something or other. It’s still unreasonable to expect people to respond to evidence in every domain, but many people do respond to words, and calling them just sounds in air doesn’t capture the reasons they do so.
I wouldn’t call it orthogonal either. Rationality is about having correct beliefs, and I would label a belief-based litmus test rational to the extent it’s correct.
Writing a post about how $political_belief is a litmus test is probably a bad idea because of the reasons you mentioned.
I generally agree with this post, but since people’s beliefs are evidence about how they change their beliefs in response to evidence, I would call it bias-inducing, and usually tribal cheering, rather than totally backwards.
Hash functions map multiple inputs to the same hash, so you would need to limit the input in some other way, and that makes it harder to verify.
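As a toy illustration of why the hash alone doesn’t pin down its preimage, here is a hypothetical Python sketch; it deliberately truncates SHA-256 to 8 bits so a collision is easy to exhibit (finding collisions in full SHA-256 is infeasible, but the many-to-one point still holds):

```python
import hashlib

def tiny_hash(s: str) -> str:
    # Toy hash for illustration: keep only the first 2 hex characters
    # (8 bits) of SHA-256, so collisions are guaranteed within 257 inputs.
    return hashlib.sha256(s.encode()).hexdigest()[:2]

seen = {}  # tiny hash -> first message observed with that hash
for i in range(1000):
    msg = f"prediction-{i}"
    h = tiny_hash(msg)
    if h in seen:
        print(f"Collision: {seen[h]!r} and {msg!r} both hash to {h}")
        break
    seen[h] = msg
```

The same pigeonhole argument applies to any fixed-length hash over unrestricted inputs, which is why the allowed input format has to be constrained separately before the commitment can be verified.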
The usual formulation of Omega does not lie.
If Omega maintains a 99.9% accuracy rate against a strategy that changes its decision based on the lottery numbers, then Omega can predict the lottery numbers. So if the lottery number is composite, Omega has multiple choices against an agent that one-boxes when the numbers differ and two-boxes when they match: it can pick the same composite number as the lottery, in which case the agent two-boxes and earns 2,001,000, or it can pick a different prime number, in which case the agent one-boxes and earns 3,001,000. It seems like the agent that one-boxes all the time does better by eliminating the cases where Omega selects the same number as the lottery, so I would one-box.
Above the top-level comment box, there’s an option to sort comments by date. Perhaps that should be the default.
But if we’re 99.9% confident that a child is going to die (say, they have a terminal disease), is being cruel to the child 99.9% less bad?
If a post older than thirty days is downvoted, the downvote doesn’t show up in the past-30-days karma.