The question required us to provide real numbers, and infinitesimals are not real numbers. Even if you allowed infinitesimals, though, 0 would still be the Nash equilibrium. After all, if 1/∞ is a valid guess, so is (1/∞)*(2/3), etc., so the exact same logic applies: any number larger than 0 is too large. The only value where everyone could know everyone else’s choice and still not want to change is 0.
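This is the standard iterated-dominance argument for the guess-2/3-of-the-average game. A tiny sketch of mine (the starting ceiling of 100 is an arbitrary illustration, not from the original comment) shows how the upper bound on rational guesses collapses toward 0:

```python
# Iterated elimination in the "guess 2/3 of the average" game.
# If no rational player guesses above `ceiling`, the average is at most
# `ceiling`, so no one should guess above 2/3 of it; repeat forever.
ceiling = 100.0  # arbitrary starting upper bound, for illustration only
for _ in range(50):
    ceiling *= 2 / 3
print(ceiling)  # ~1.57e-07 after 50 rounds; the limit is exactly 0
```

The same shrinking works from any starting ceiling, which is why 0 is the unique equilibrium.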
Simetrical
Doing this with server-side scripting is crazy. You’d have to submit a zillion forms and wait a second for the answer on each try. This is precisely the sort of thing client-side scripting is meant for.
Of course, if you had JavaScript disabled, the page would explain that it needed JavaScript rather than just showing a blank page.
I got the wrong rule, but it said I was right because I made only one mistake. I thought the rule was that a sequence was awesome if it was an increasing arithmetic progression. The only one of your examples at the end that contradicted this was 2, 9, 15. All the other awesome ones were, in fact, increasing arithmetic progressions: five out of the six awesome sequences you gave at the end. You should probably cut that down to two or three, so I’d have lost.
That clears things up a lot. I hadn’t really thought about the multiple-models take on it (despite having read the “prior probabilities as mathematical objects” post). Thanks.
Even accepting the premise that voting for the proposition was clearly wrong, that’s a single anecdote. It does nothing to demonstrate that Mormons are overall worse people than atheists. It is only a single point in the atheists’ favor. I could respond with examples of atheists doing terrible things, e.g., the amount of suffering caused by communists.
Anecdotes are not reliable evidence; you need a careful, thorough, and systematic analysis to be able to make confident statements. It’s really surprised me how commonly people supply purely anecdotal evidence here and expect it to be accepted (and how often it is accepted!). This is a site all about promoting rationalism, and part of that is reserving judgment unless you have good evidence.
I really don’t think a systematic analysis of the morality of Mormons vs. atheists exists, for any given utility function. That kind of analysis is probably close to impossible, in fact, even if you can precisely specify a utility function that a lot of people will agree on. To begin with, it would absolutely have to be controlled to be meaningful ― the cultural and other backgrounds of atheists are surely not comparable, on average, to those of Mormons.
I think this is an issue that rationalists just need to admit uncertainty about. That’s life, when you’re rational. Only religious people get to be certain most of the time about moral issues. A Mormon asked the same question would be able to say with confidence that the atheists caused more evil, since not following Mormonism is so evil that it would clearly outweigh any minor statistical differences between the two groups in terms of things like violent crime. If you believe in utility functions that depend on all sorts of complex empirical questions, you really can’t answer most moral questions very confidently.
I think this post could have been more formally worded. It draws a distinction between two types of probability assignment, but the only practical difference given is that you’d be surprised if you’re wrong in one case but not the other. My initial thought was just that surprise is an irrational thing that should be disregarded ― there’s no term for “how surprised I was” in Bayes’ Theorem.
But let’s rephrase the problem a bit. You’ve made your probability assignments based on Omega’s question: say 1⁄12 for each color. Now consider another situation where you’d give an identical probability assignment. Say I’m going to roll a demonstrated-fair twelve-sided die, and ask you the probability that it lands on one. Again, you assign 1⁄12 probability to each possibility.
(Actually, these assignments are spectacularly wrong, since they give a zero probability to all other colors/numbers. Nothing deserves a zero probability. But let’s assume you gave a negligible but nonzero probability to everything else, and 1⁄12 is just shorthand for “slightly less than 1⁄12, but not enough to bother specifying”.)
So as far as everything goes, your probability assignments for the two cases look identical up to this point. Now let’s say I offer you a bet: we’ll go through both events (drawing a bead and putting it back, or rolling the die) a million times. If your estimated probability of red/one is within 1% of the observed frequency in that sample, I give you $1000. Otherwise, you give me $1000.
In the case of the die, we would all take the bet in a heartbeat. We’re very sure that our figures are correct, since the die is demonstrated to be fair, and 1% is a lot of wiggle room for the law of large numbers. But you’d have to be crazy to take the same bet on the jar, despite having assigned a precisely identical chance of winning.
So what’s the difference? Isn’t all the information you care about supposed to be encapsulated in your probability distribution? What is the mathematical distinction between these two cases that causes such a clear difference in whether a given bet is rational? Are we supposed to not only assign probabilities to which events will occur, but also to our probabilities themselves, ad infinitum?
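The asymmetry between the two bets is easy to see in simulation. This sketch is mine, not part of the original exchange; the jar’s true red fraction of 1/5 is an arbitrary stand-in for “whatever distribution Omega actually used”:

```python
import random

TRIALS = 1_000_000
ESTIMATE = 1 / 12  # the stated probability in both cases

def estimate_within_one_percent(true_p):
    """Draw TRIALS times with success probability true_p and check
    whether the observed frequency is within 1% of ESTIMATE."""
    hits = sum(random.random() < true_p for _ in range(TRIALS))
    freq = hits / TRIALS
    return abs(freq - ESTIMATE) / ESTIMATE <= 0.01

# Fair die: the true probability really is 1/12.
print(estimate_within_one_percent(1 / 12))  # True with ~99.7% probability

# Jar: suppose the true red fraction happens to be 1/5 (unknown to you).
print(estimate_within_one_percent(1 / 5))   # False, essentially always
```

The two situations assign the same number to “red” and “one”, but only in the die’s case is that number backed by confidence that the long-run frequency will match it.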
I see this conclusion as a mistake: being surprised is a way of translating between intuition and explicit probability estimates. If you are not surprised, you should assign high enough probability, and otherwise if you assign tiny probability, you should be surprised (modulo known mistakes in either representation).
That’s not true at all. Before I’m dealt a bridge hand, my probability assignment for getting the hand J♠, 8♣, 6♠, Q♡, 5♣, Q♢, Q♣, 5♡, 3♡, J♣, J♡, 2♡, 7♢ in that order would be one in 3,954,242,643,911,239,680,000. But I wouldn’t be the least bit surprised to get it.
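For the record, that figure is just the number of ordered 13-card deals from a 52-card deck, 52!/39!, which is easy to verify:

```python
import math

# Ordered 13-card deals from a 52-card deck: 52 × 51 × … × 40 = 52!/39!.
ordered_deals = math.perm(52, 13)
print(ordered_deals)  # 3954242643911239680000
```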
In the terminology of statistical mechanics, I guess surprise isn’t caused by low-probability microstates ― it’s caused by low-probability macrostates. (I’d have been very surprised if that were a full suit in order, despite the fact that a priori that has the same probability.) What you define as a macrostate is to some extent arbitrary. In the case of bridge, you’d probably divide up hands into classes based on their utility in bridge, and be surprised only if you get an unlikely type of hand.
In this case, I’d probably divide the outcomes up into macrostates like “red”, “some other bright color like green or blue”, “some other common color like brown”, “a weird color like grayish-pink”, and “something other than a solid-colored ball, or something I failed to even think of”. Each macrostate would have a pretty high probability (including the last: who knows what Omega’s up to?), so I wouldn’t be surprised at any outcome.
This is an off-the-cuff analysis, and maybe I’m missing something, but the idea that any low-probability event should be surprising certainly can’t be correct.
Huh. Do you need me to post a few dozen links to articles detailing incidents where Mormons did evil acts because of their religious beliefs? I mean, Mormonism isn’t as inherently destructive as Islam, but it’s not Buddhism either.
Do you have empirical evidence that Mormons are more likely to cause harm than atheists? (Let’s say in the clear-cut sense of stabbing people instead of in the sense of spreading irrationality.) Mormons might do more bad things because their god requires it, but atheists might do more bad things because they don’t have a god to require otherwise. They might be more likely to become nihilists or solipsists and not care about other people, say, acting purely selfishly. A priori, I have no idea which one is correct.
It seems that as a rationalist, you should be wary of assigning high probabilities here without direct empirical evidence. Especially since you presumably suffer from in-group bias. But perhaps you’re aware of studies that support your view that religion is harmful in a simple sense?
(If you consider spreading religion inherently evil, then you have more reason to presume that Mormonism is harmful. You would still have to argue that the harm outweighs any possible benefit, but you’d have a stronger case for assuming that. However, by your comparisons to Islam and Buddhism you seem to mean plain old violence and so forth.)
If the question is “Should Wednesday, while not exactly choosing to believe religion, avoid thinking about it too hard because she thinks doing so will make her an atheist?,” then she’s already an atheist on some level because she thinks knowing more will make her more atheist, which implies atheism is true. This reduces to the case of deception, which you seem to be against unconditionally.
That’s not necessarily true. Perhaps she believes Mormonism is almost certainly right, but acknowledges that she’s not fully rational and might be misled if she read too many arguments against it. Most Christians believe in the idea that God (or Satan) tempts people to sin, and that avoiding temptation is a useful tactic to avoid sin. Kind of like avoiding stores where candy is on display if you’re trying to lose weight, say. You know what’s right in advance, but you’re afraid of losing resolve.
Certainly whatever your beliefs, some people who disagree with you are sufficiently charismatic and good at rhetoric that they might persuade you if you give them the chance. (Well, for most of us, anyway.) How many atheist Less Wrongers would be able to withstand lengthy debate with very talented missionaries? Some, certainly. Most, probably. All? I doubt it.
Overall, though, an excellent response, and I agree with almost all the rest of it.
For what it’s worth, I’m a MediaWiki developer—if you’re using MediaWiki, I’d be happy to help out if anyone wants to know “how do I do X” or otherwise get assistance of some kind setting up or configuring the wiki.
Like Mark Twain’s definition of a classic: “Something that everybody wants to have read and nobody wants to read.”
Well, everyone sharing the exact same opinion would be stable.
The question “Where did people come from?” is one that you’d expect to be answerable, and therefore a reasonable question to ask. We might, in principle, be able to do research in the physical world to figure out where we came from, since physical events (such as the appearance of a new species) leave traces in the physical world that we might be able to detect long after the fact. Likewise, intuition suggests that everything in the physical world comes from somewhere, and so an answer of “We were always here” seems intuitively unlikely.
On the other hand, if you ask “Where did God come from?”, you’re talking about an entity that (in the case of a Jewish-style God) predated all physical existence. There’s no reason to expect us to be able to figure out where God came from, if a God exists. And since God doesn’t have to play by the rules of the physical world, “God always existed” sounds much more palatable than “humans always existed”: God isn’t something we expect to obey our intuition. God is supposed to be inherently perfect and unchanging, so “God always existed” fits in nicely with our picture of God.
Now, you can fairly say that this is all completely unverifiable and can be matched up to any facts you feel like by altering details. You’d be totally right. But there are real reasons for why many people ask “Where did humans come from?” and don’t ask “Where did God come from?” It’s not just because they’re “not allowed” to ask those questions—the people who came up with the answers sure were allowed to ask them! It’s because the idea of an eternal God is intuitively more satisfactory than the idea of eternal humans, even if this breaks down upon closer inspection.
Yes it did and does, though you’re left having to handwave away the question of “how did God arise?”
Yup, but those seem less troubling if anything than the questions atheism would be unable to answer at the time.
All I ask is that laws have 1) a clearly defined goal of solving a problem that society wants to solve, and 2) empirical evidence (gathered after the fact, if needed) that they are doing what they were intended to do with acceptable side-effects.
How can you gather the evidence after the fact without experimentation? You have to try out alternative copyright schemes, for instance, to test whether it’s actually working well. Otherwise I don’t know what you’d consider empirical evidence for success.
Marijuana criminalization seems to badly fail at least the latter, and the former depending on what problem you think it’s solving.
How can you tell? What would the actual effects of decriminalizing it be? What would widespread marijuana use do to traffic accidents, the intelligence of the general public, etc.? You can argue that it’s surely better than alcohol and tobacco, but the obvious counterargument is that those are too entrenched to do away with (especially alcohol) and therefore have to be grandfathered in for pragmatic reasons.
Who’s right? Maybe you’re right, but the only way to tell is to experiment. I’d be all in favor of more experiments in things like criminal law, to be sure, but I don’t think the evidence in favor of a marijuana ban at present is much worse than that in favor of copyright.
In the Middle Ages, I’m not sure atheism would be too much more rational than theism, in any sense. To the average European in the year 1000, being an atheist would probably be about as rational as being a heliocentrist, i.e., not at all. We know all the arguments in favor of atheism and heliocentrism, but they didn’t. No amount of rationalism is going to let you judge things based on evidence you don’t know about.
The average person back then could probably have given you plenty of evidence for God’s existence. The evidence would be weak by modern standards, but not by medieval standards. No one was conducting scientific studies then: almost any assertion not directly checkable was supported by pretty weak evidence. Theism might make few predictions and test them rarely, but the same was true of all the alternatives. On the other hand, theism at least had coherent and consistent answers to a slew of basic questions like “How did life arise?”, which atheism did not.
So I think the answer is that the only rational principle that would have allowed you to deconvert in medieval times would be “single-handedly reconstructing modern science”.
Well, if you’re altruistic in the sense you describe, you don’t have the utility function I gave in my scenario, so your result will vary. If you don’t really mind going to hell too much, comparatively, then the argument doesn’t work well.
For what it’s worth, I’ve recently started reading this site and am an Orthodox Jew. I have no particular plans to stop reading the site for the time being, because it’s often rather interesting.
It may be worth considering that while rationalists may feel they don’t need religion, almost all religious people would acknowledge the need for rationality of some kind. If rationality is about achieving your goals as effectively as possible (as some here think), then does it suddenly not work if your goals are “obey the Bible”? No—your actions will be different from someone with different goals (utilitarianism, etc.), but most of the thought-process is the same.
Suppose you have an extremely high prior probability for God sending doubters to Hell, for whatever reason. Presumably the utility of going to Hell is very, very low. Then, as a rational Bayesian, you should avoid any evidence that would tend to cause you to doubt God, shouldn’t you? I certainly don’t know much about Bayesian probability, but I can’t see any flaw in that logic.
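As a toy illustration of that expected-utility argument (every number below is invented for the example; none of it comes from the original comment):

```python
# All figures are made up purely for illustration.
p_hell_if_doubt = 0.9    # assumed prior: doubters are very likely damned
p_doubt_if_read = 0.1    # chance the evidence actually induces doubt
u_hell = -1_000_000      # utility of Hell: very, very low
u_learn = 10             # modest utility of whatever the evidence teaches

# Reading the evidence risks doubt, and hence Hell:
eu_read = p_doubt_if_read * p_hell_if_doubt * u_hell + u_learn  # ~ -89990
# Avoiding the evidence forgoes only the small epistemic gain:
eu_avoid = 0.0

print(eu_read < eu_avoid)  # True: avoidance maximizes expected utility
```

With any sufficiently enormous disutility for Hell, even a small chance of induced doubt swamps the value of what the evidence could teach, so avoidance wins.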
The question seems rather similar to that of Omega. The winners are those who can convince themselves, by any means, that a particular belief is right. In that sense, God could be said to reward irrationality, just like Omega. The only real difference is that in Omega’s case, nobody doubts the fact that Omega exists and is doing the judging in the first place. I don’t think that’s essential to the nature of the problem, although it makes it harder for most rationalists to dismiss.
Of course, “rationalism” as used on this site often implies acceptance of empiricism, Occam’s razor, falsifiability, and things like that, not just pure Bayesian logic with arbitrary priors. But of course, I almost completely accept all those things, and am tolerant of those who accept them more thoroughly than I. It should therefore not be very surprising that I’d see value in this site, along with other religious people with similar attitudes (however few there may be).
I do think that at least being polite toward religion (which doesn’t always happen here) is more likely to advance the goals of this site than otherwise. It doesn’t help anyone’s goals to drive people away before you can deconvert them; and even if you can’t deconvert them, you still gain by helping them think more logically (by your definitions) in other areas.
Can you name any evidence supporting the necessity of, to pick a moderately troublesome example off the top of my head, copyright? I’m not aware of any alternatives being tried (successfully or otherwise) in modern countries, so there’s no actual evidence for its necessity. Shall we abolish governmental protection of intellectual property? That’s a somewhat tenable position (donation-based profit, etc.), but I’m guessing most people here don’t hold it.
I suspect that if your suggestion’s consequences were carefully inspected, it would turn out to be more or less indistinguishable from a very extreme form of libertarianism. I’m aware of no clear evidence that prohibiting civilian possession of assault helicopters and anti-tank missiles is beneficial. Are you? Perhaps they’d be primarily used to resist oppressive governments.
It’s also worth observing that plenty of professed rationalists take the exact opposite approach to you. They follow the precautionary principle: ban anything unless we have evidence it’s not harmful.
I don’t think much of either approach. Suggesting that we should have a hard-and-fast rule of what we have to do in the absence of clear evidence is a bad idea. Humans have capacities for intuition and logic in addition to our capacity to gather empirical evidence. If evidence is lacking, we need to take a best guess, not just say “let’s permit/ban it”.
In that case Warrigal would have said “rational” rather than “real”. Numbers such as 17π would presumably be fine too, not just fractions. “No funny business” presumably means “I’d better be able to figure out whether it’s the closest easily”. For instance, the number “S(12)/2^n, where S is the max shifts function and n is the smallest integer such that my number is less than 100” is technically well-defined, in a mathematical sense. But if you can actually figure out what it is, you could publish a paper about it in any journal of computer science you liked.