my friends were ready to raise $100 so I would carry it out
Are you sure you want to call them “friends”? Willingness to pay to lower someone else’s status isn’t particularly friendly behaviour, even if the person “doesn’t care” about status.
The health hazard would probably be worth less (in absolute value) than the discussed reward of $200. The PR hazard, on the other hand, would justify your bottom line.
I haven’t been suggesting using (A or B) as a name for (not ((not A) and (not B))) in constructive logic, where they aren’t equivalent. Rather, I have been suggesting using classical logic (where the above sentences are equivalent) with a constructivist interpretation, i.e. not distinguishing between “true” and “theorem”. But since it is possible for (A or B) to be a theorem while both A and B are non-theorems, the logical “or” would not have the same interpretation; it wouldn’t match the common-language “or” (when we say “A or B is true”, we mean that one of them must indeed be true).
Wouldn’t it still be possible for a constructivist to embrace classical logic and the theoremhood of TND? The constructivist would just have to admit that (A or B) could be true even if neither A nor B is true. (A or B) would still not be meaningless: its truth would imply that there is a proof of neither (not A) nor (not B), so this reinterpretation of “or” doesn’t seem to be a big deal.
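As a side illustration of the non-equivalence being discussed (my own sketch, not part of the original exchange): in a minimal Lean 4 formalisation, the direction from (A or B) to (not ((not A) and (not B))) goes through constructively, while the converse needs classical reasoning, i.e. TND.

```lean
-- Illustration only: the two formulas discussed above.
-- Constructively provable: (A ∨ B) → ¬(¬A ∧ ¬B)
theorem or_implies (A B : Prop) (h : A ∨ B) : ¬(¬A ∧ ¬B) :=
  fun ⟨hna, hnb⟩ => h.elim hna hnb

-- The converse needs classical reasoning (TND / excluded middle):
-- ¬(¬A ∧ ¬B) → (A ∨ B)
theorem converse_needs_classical (A B : Prop) (h : ¬(¬A ∧ ¬B)) : A ∨ B :=
  Classical.byContradiction fun hno =>
    h ⟨fun ha => hno (Or.inl ha), fun hb => hno (Or.inr hb)⟩
```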
As I understand the responses, most people think the main point of Newcomb’s problem is that you should rationally cooperate given the 1000000 / 1000 payoff matrix.
I am no expert on the history of Newcomb’s problem, but I think it was specifically constructed as a counter-example to the common-sense decision-theoretic principle that one should treat past events as independent of the decisions being made now. That’s also how it is most commonly interpreted on LW, although the concept of a near-omniscient predictor “Omega” is employed in a wide range of different thought experiments here, and it’s possible that your objection can be relevant to some of them.
I am not sure whether it makes sense to call one-boxing cooperation. Newcomb’s problem isn’t the Prisoner’s dilemma, at least not in its original form.
OK, I understand now that your point was that one can in principle avoid being predicted. But to put it as an argument proving the irrelevance or incoherence of Newcomb’s problem (not entirely sure that I understand correctly what you meant by “dissolve”, though) is very confusing and prone to misinterpretation. Newcomb’s problem doesn’t rely on the existence of predictors who can predict any agent in any situation. It relies on the existence of rational agents that can be predicted at least in certain situations, including the scenario with the boxes.
I still don’t understand why you would be so surprised if you saw Omega doing the trick a hundred times, assuming no stage magic. Do you find it so improbable that out of the hundred people Omega has questioned not a single one had a quantum coin on him and a desire to toss it on the occasion? Even game-theoretical experiment volunteers usually don’t carry quantum widgets.
Tell that to the hypothetical obscurantist.
Edit: I find it mildly annoying when, answering a comment or post, people point out obvious things whose relevance to the comment / post is dubious, without further explanation. If you think that the non-equivalence of the mentioned beliefs somehow implies the impossibility of extrapolating obscurantist values, please elaborate. If you just thought that I might have committed a sloppy inference and it would be cool to correct me on it, please don’t do that. It (1) derails the discussion into uninteresting nitpicking and (2) motivates commenters to clutter their comments with disclaimers in order to avoid being suspected of sloppy reasoning.
(1) Why would Joe intend to use the random process in his decision? I’d assume that he wants the million dollars much more than he wants to prove Omega’s fallibility (and even that only with a 50% chance).
(2) Even if Joe for whatever reason prefers proving Omega’s fallibility, you can stipulate that Omega gives the quest only to people without semitransparent mirrors at hand.
(3) How is this
First of all I want to point out that I would still one-box after seeing Omega predict 50 or 100 other people correctly, since 50 to 100 bits of evidence are enough to overcome (nearly) any prior I have about how the universe works.
compatible with this
So I would be very very VERY surprised if I saw Omega pull this trick 100 times in a row and I could somehow rule out Stage Magic (which I could not).
(emphasis mine)?
Note about terminology: on LW, dissolving a question usually refers to explaining that the question is confused (there is no answer to it as stated), together with pointing out the reasons why such a question seems sensible at first sight. What you are doing is not dissolving the problem; it’s rather fighting the hypothetical.
Couldn’t this be said about any inductive method, at least in cases when the method works?
There are obscurantists who wear their obscurantism as attire, proudly claiming that it is impossible to know whether God exists. It can be said, perhaps, that such an obscurantist has a preference for not knowing the answer to the question, for never storing a belief of “God does (not) exist” in his brain. But still, all the obscurantist’s decisions are the same as if he believed that there is no God; the obscurantist belief has no influence on his other preferences. In such a case, you may well argue that the extrapolated volition of the obscurantist is to act as if he knew the answer, and therefore the obscurantist beliefs are shattered. But this is also true of his non-extrapolated volition. If the non-extrapolated volition already ignores the obscurantist belief and can coexist with it, why is this possibility excluded for the extrapolated volition? Because of the “coherent” part? Does coherence of volition require that one is not mistaken about one’s actual desires? (This is an honest question; I think that “volition” refers to the set of desires, which is to be made coherent by extrapolation in the case of CEV, but that it doesn’t refer to beliefs about the desires. But I haven’t been that interested in CEV and may be mistaken about this.)
The more interesting case is an obscurantist who holds obscurantism as a worldview with real consequences. To keep to things that are plausible (I am not sure whether this kind of obscurantist exists in non-negligible numbers), imagine a woman who holds that the efficacy of homoeopathics can never be established with any reasonable certainty. Now she may get cancer and have two possibilities for treatment: a conventional one, with a 10% chance of success, and a homoeopathic one, with a 0.1% chance (equal to that of a spontaneous remission). But, in accordance with her obscurantism, she believes that assigning anything other than 50% to homoeopathy working would mean that we know the answer here, and since we can’t know, homoeopathy indeed has a 50% chance of success.
Acting on these beliefs, she decides for the homoeopathic treatment. One of her desires is to survive, which leads to choosing the conventional treatment upon extrapolation, thus creating a conflict with the actual decision. But isn’t it plausible that another of her desires, namely to always decide as if the chance of homoeopathy working were 50%, is strong enough to survive the extrapolation and take precedence over the desire to survive? People have died for their beliefs many times.
First of all, is the existence of such an agent implausible? Not really, considering there are masochists out there and that, to some individuals, ignorance is bliss.
Why argue for the plausibility of something when it clearly exists? I have personally met several people who fit your definition of obscurantist, and I don’t doubt that you have too.
How much, then, will be left of an obscurantist’s identity upon coherently extrapolating their desires? The answer is probably not much, if anything at all.
Is there some argument for the probable answer? I don’t find it obvious.
Bad posts often get a strong karma hit initially, when the most vigilant readers check them, and their score later returns towards zero. It is possible (although not likely) that two months from now the post will stand at +2, your vote contributing to the positive score.
a way of doing induction without trying to solve the problem of induction
Well, this is the thing I have trouble understanding. The problem of induction is a “problem” due to the existence of incompatible philosophical approaches; there is no “problem of deduction” to solve because everybody (mostly) agrees how to do that. Doing induction without solving the problem would be possible if people agreed how to do it and the disagreement were confined to inconsequential philosophical interpretations of the process. Then it would indeed be wise to do the practical stuff and ignore the philosophy.
But this is probably not the case; people seem to disagree about how to do induction, and there are people (well represented on this site) who have reservations about frequentist hypothesis testing. I am confused.
My understanding of standardised hypothesis tests was that they serve the purposes of
(1) avoiding calculations dependent on details of the alternative hypothesis, and
(2) providing objective criteria to decide under uncertainty.
There are practical reasons for both purposes. (1) is useful because the alternative hypothesis is usually more complex than the null and can have lots of parameters, so calculating probabilities under the alternative may become infeasible, especially with limited computing power. As for (2), science is a social institution: journal editors need a tool for refusing to publish unfounded hypotheses without the risk of being accused of having “unfair” priors, or whatever.
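To make point (1) concrete, here is a minimal Python sketch of my own (the numbers are made up): a one-sided binomial test produces a p-value using only the distribution under the null hypothesis, without ever specifying the alternative in detail.

```python
# Hypothetical illustration: a one-sided binomial test needs only the null
# distribution, not a fully specified alternative hypothesis.
from scipy.stats import binom

n, k, p_null = 100, 63, 0.5           # made-up data: 63 successes in 100 trials, null p = 0.5
p_value = binom.sf(k - 1, n, p_null)  # P(X >= k) under the null
print(p_value)                        # ~0.006, so the null would be rejected at the 1% level
```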
However, I don’t understand how exactly hypothesis tests help to solve the philosophical problems with induction. Perhaps it would be helpful to list several popular philosophical approaches to induction (I am not sure what the major competing paradigms are here; perhaps Bayesianism, falsificationism, “induction is impossible”?), present examples of problems where the proponents of particular paradigms disagree about the conclusion, and show how a hypothesis test could resolve the disagreement?
I think North Korea is no problem for the quoted sentence. I interpret it as saying that the government doesn’t care about the wants of non-citizens, rather than asserting that the government cares about a significant number of citizens.
Nevertheless, even assuming this interpretation, it is still not self-evident.
The historical Steelman was also a strongman, at least according to Wikipedia.
Wedrifid’s interpretation is the intended one. I agree that the chosen formulation wasn’t particularly clear.
Bayes was a minister, after all. Now, a divine quote from the gay Turing would be a different feat altogether.
Not sure I want to know that.
“Obviously bad” isn’t a utilitarian justification.
To play the Devil’s advocate:
I expect you seriously underestimate the strength of the Namboothiris’ feelings. To us it seems like pure religious madness; moreover, we feel outrage at the extreme inequality existing because of ancient caste prejudices, so we tend to sympathise with the Untouchables and regard the traditional Brahmin rights as unjust. But it doesn’t seem that way from the Brahmin perspective.
Some of the unpleasantness connected with non-consensual sex is probably status related: being raped makes one lose a lot of status, and we tend to avoid status loss. I wonder how much less serious a problem rape would become in a society where the negative status effects were removed. We find it acceptable to solve the caste problem by rebuilding society and changing people’s values, even when many people object; why not attempt the same approach to rape?
(Disclaimer: I think that caste society is unjust and I don’t actually wish to change our society to be more rape-tolerant. But I am no utilitarian. This comment is a warning against creating fake utilitarian explanations of moral judgements made on non-utilitarian grounds.)