Nope. He’s saying that based on his best analysis, it appears to be the case.
Does this particular thought experiment really have any practical application?
I can think of plenty of similar scenarios that are genuinely useful and worth considering, but all of them can be expressed in much simpler and more intuitive terms—eg when the offer will/might be repeated, or when you get to choose in advance whether to flip the coin and win 10000/lose 100. But with the scenario as stated—what real phenomenon is there that would reward you for being willing to counterfactually take an otherwise-detrimental action, for no reason other than qualifying for the counterfactual reward? Even if we decide the best course of action in this contrived scenario—therefore what?
Also, there is the possibility of future scenarios arising in which Bob could choose to take comparable actions, and we want to encourage him in doing so. I agree that the cases are not exactly analogous.
people who are say, religious, or superstitious, or believe in various other obviously false things
Why do you think you know this?
A while ago, I came across a mathematics problem involving the calculation of the length of one side of a triangle, given the internal angles and the lengths of the other two sides. Eventually, after working through the trigonometry of it (which I have now forgotten, but could re-derive if I had to), I realised that it incorporated Pythagoras’ Theorem, but with an extra term based on the cosine of one of the angles. The cosine of 90 degrees is zero, so in a right-angled triangle, this extra term disappears, leaving Pythagoras’ Theorem as usual.
The older law that I knew turned out to be a special case of the more general law.
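(For reference, that more general law is the Law of Cosines. A quick sketch, assuming the usual labelling in which side c lies opposite angle C:

c^2 = a^2 + b^2 - 2ab\cos(C)

When C = 90 degrees, \cos(C) = 0, the extra term vanishes, and what remains is c^2 = a^2 + b^2—Pythagoras' Theorem as usual.)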
If the hypothetical Omega tells you that there is indeed a maximum value for happiness, and you will certainly be maximally happy inside the box: do you step into the box then?
This would depend on my level of trust in Omega (why would I believe it? Because Omega said so. Why believe Omega? That depends on how much Omega has demonstrated near-omniscience and honesty). And in the absence of Omega telling me so, I’m rather skeptical of the idea.
If you believe in G-d then you believe in a being that can change reality just by willing it
OK, so by that definition...if you instead believe in a perfect rationalist that has achieved immortality, lived longer than we can meaningfully express, and now operates technology that is sufficiently advanced to be indistinguishable from magic, including being involved in the formation of planets, then—what label should you use instead of ‘G-d’?
I know what a garage would behave like if it contained a benevolent God
Do you, though? What if that God was vastly more intelligent than us; would you understand all of His reasons and agree with all of His policy decisions? Is there not a risk that you would conclude, on balance, “There should be no ‘banned products shops’”, while a more knowledgeable entity might decide that they are worth keeping open?
it greatly changes the “facts” in your “case study”.
Actually, does it not add another level of putting Jesus on a pedestal above everyone else?
It changes the equation when comparing Jesus to John Perry (indicating that Jesus’ suffering was genuinely heroic after all), but it perhaps intensifies the problem of “Alas, somehow it seems greater for a hero to have steel skin and godlike powers.”
(Btw I’m one of the abovementioned Christians. Just thought I’d point out that the article’s point is not greatly changed.)
you cannot be destroyed
In the sense that your mind and magic will hang around, yes. But your material form can still be destroyed, and material destruction of a Horcrux will destroy its ability to anchor the spirit.
So, if two people are mutual Horcruxen, you can still kill person 1, at which point s/he will become a disembodied spirit dependent on person 2, but will cease to be an effective Horcrux for person 2. You can then kill person 2, which will permanently kill both of them.
All you really achieve with mutual Horcruxen is to make your Horcrux portable and fragile (subject to illness, aging, accident, etc).
It appears the key issue in creating conflict is that the two groups must not be permitted to get to know each other and become friendly
Because then, of course, they might start attributing each other’s negative actions to environmental factors, instead of assuming them to be based on inherent evil.
If those are the unfortunate downsides of policies that are worthwhile overall, then I don’t think that qualifies for ‘supervillain’ status.
I mean, if you’re postulating the existence of God, then that also brings up the possibility of an afterlife, etc, so there could well be a bigger picture and higher stakes than threescore years and ten. Sometimes it’s rational to say, “That is a tragedy, but this course of action is still for the best.” Policy debates should not appear one-sided.
If anything, this provides a possible answer to the atheist’s question, “Why would God allow suffering?”
Isn’t this over-generalising?
“religion makes claims, not arguments, and then changes its claims when they become untenable.” “claims are all religion has got” “the religious method of claiming is just ‘because God said so’”
Which religion(s) are you talking about? I have a hard time accepting that anyone knows enough to talk about all of them.
The happiness box is an interesting speculation, but it involves an assumption that, in my view, undermines it: “you will be completely happy.”
This is assuming that happiness has a maximum, and the best you can do is top up to that maximum. If that were true, then the happiness box might indeed be the peak of existence. But is it true?
Email sent about a week ago. Did it get spam-filtered?
I tend to think that the Bible and the Koran are sufficient evidence to draw our attention to the Jehovah and Allah hypotheses, respectively. Each is a substantial work of literature, claiming to have been inspired by direct communication from a higher power, and each has millions of adherents claiming that its teachings have made them better people. That isn’t absolute proof, of course, but it sounds to me like enough to privilege the hypotheses.
I think it’s pretty clear that animals can feel pain, distress, etc. So we should aim for practices that minimise those things. It’s certainly possible—though harder on a mass scale like factory farming.
Also, from a utilitarian perspective, it’s clear that eating plants is much more ecologically efficient than feeding plants to animals and then eating the animals. On the other hand, as Elo points out, there are crops and types of terrain that are not well suited to producing human food, and might more profitably be used to raise edible animals.
So I’d say that there could be an equilibrium, a point where our overall meat consumption is about right; less would be basically a wasted opportunity; more would be an inefficient use of resources and a risk of oppressive practices. And I’d say that that point is much lower than current overall consumption.
“if, for example, there was an Islamic theologian who offered to debate the issues with me then I would be inclined to do it and follow where the belief updates lead.”
Is that an open offer to theologians of all stripes?
In discussing Newcomb’s problem, Eliezer at one point stated, “Be careful of this sort of argument, any time you find yourself defining the “winner” as someone other than the agent who is currently smiling from on top of a giant heap of utility.”
This aligns well with a New Testament statement from Jesus, “Ye shall know them by their fruits...every good tree bringeth forth good fruit, but a corrupt tree bringeth forth evil fruit.”
So, I’m only a novice of the Bayesian Conspiracy, but I can calculate the breast cancer percentages and the red vs blue pearl probabilities. To answer Eliezer’s question, to convince me of the truth of Islam, it would have to show me better outcomes, better fruit, than Christianity, across the board. In many cases its principles don’t conflict with Christianity; but where they do, I would have to establish that resolving the conflict in favor of Islam will lead to better outcomes.
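(For concreteness, here is the sort of calculation I mean—a sketch of the breast cancer example, using the figures I recall from Eliezer’s essay: roughly 1% prevalence, an 80% true-positive rate, and a 9.6% false-positive rate:

P(\text{cancer} \mid \text{positive}) = \frac{0.8 \times 0.01}{0.8 \times 0.01 + 0.096 \times 0.99} \approx 0.078

That is, about a 7.8% posterior probability of cancer, despite the positive test.)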
Consider, as just one example, the fruits identified by LW’s own Swimmer963 (http://lesswrong.com/lw/4pg/positive_thinking/). Would following Islam give me a better foundation than that for positive thinking, resilience, and motivation for mutual help? Then I would be interested. Not convinced by that alone, but interested.
If Islam consistently offered a more coherent worldview that more effectively helped me to become a better person and achieve my goals, then I would have cause to consider that it might have more truth than what I believe now. As far as I have yet determined, it doesn’t; its teachings seem to be more limited, less useful, than what I already have.
Perhaps the only appropriate uses for probability 0 and 1 are to refer to logical contradictions (eg P & !P) and tautologies (P → P), rather than to real-world propositions?