I don’t get it really. I mean, I get the method, but not the formula. Is this useful for anything though?
Also, a simpler method of explaining the Monty Hall problem is to think of it as if there were more doors. Let’s say there were a million (that’s a lot of goats). You pick one and the host eliminates every other door except one. The probability you picked the right door is one in a million, but he had to make sure that the door he left unopened was the one with the car behind it, unless you picked the one with the car, which is a one-in-a-million chance.
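For what it’s worth, here is a minimal Monte Carlo sketch of that intuition (Python; the function name and trial count are just illustrative). With 3 doors, switching wins about 2/3 of the time; with a million doors it wins essentially always.

```python
import random

def monty_hall_win_rate(n_doors, switch, trials=100_000):
    """Estimate the win rate for sticking or switching with n_doors doors."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(n_doors)
        pick = random.randrange(n_doors)
        # The host opens every other door except one, never revealing the car,
        # so the single unopened door hides the car whenever the first pick
        # was wrong. Switching therefore wins exactly when the first pick missed.
        wins += (pick != car) if switch else (pick == car)
    return wins / trials

if __name__ == "__main__":
    for n in (3, 1_000_000):
        print(n, "doors | stick:", monty_hall_win_rate(n, switch=False),
              "| switch:", monty_hall_win_rate(n, switch=True))
```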
It might help to read the sequences, or just read Jaynes. In particular, one of the central ideas of the LW approach to rationality is that when one encounters new evidence one should update one’s belief structure based on this new evidence and one’s prior estimates, using Bayes’ theorem. Roughly speaking, this is in contrast to what is sometimes described as “traditional rationalism”, which doesn’t emphasize updating on each piece of evidence but rather updating only after one has a lot of clearly relevant evidence.
Edit: Recommendation of Map-Territory sequence seems incorrect. Which sequence is the one to recommend here?
Updating your belief based on different pieces of evidence is useful, but (and it’s a big but) just believing strange things based on incomplete evidence is bad. Also, this neglects the fact of time. If you had an infinite amount of time to analyze every possible scenario, you could get away with this, but otherwise you have to make quick assumptions. Then, instead of testing whether these assumptions are correct, you just go with them wherever they take you. If only you could “learn how to learn” and use the Bayesian method on different methods of learning; e.g., test out different heuristics and see which ones give the best results. In the end, you find humans already do this to some extent, and “traditional rationalism” and science are based on the end result of this method. Is this making any sense? Sure, it’s useful in some abstract sense and on various math problems, but you can’t program a computer this way, nor can you live your life trying to compute statistics like this in your head.
Other than that, I can see different places where this would be useful.
nor can you live your life trying to compute statistics like this in your head
And so it is written, “Even if you cannot do the math, knowing that the math exists tells you that the dance step is precise and has no room in it for your whims.”
I may not be the best person to reply to this given that I a) am much closer to being a traditional rationalist than a Bayesian and b) believe that the distinction between Bayesian rationalism and traditional rationalism is often exaggerated. I’ll try to do my best.
Updating your belief based on different pieces of evidence is useful, but (and it’s a big but) just believing strange things based on incomplete evidence is bad.
So how do you tell if a belief is strange? Presumably if the evidence points in one direction, one shouldn’t regard that belief as strange. Can you give an example of a belief that should be considered a bad one to hold because of its strangeness, but that a Bayesian could plausibly be led to accept in this way?
Also, this neglects the fact of time. If you had an infinite amount of time to analyze every possible scenario, you could get away with this, but otherwise you have to just make quick assumptions.
Well yes, and no. The Bayesian starts with some set of prior probability estimates, general heuristics about how the world seems to operate (reductionism and locality would probably be high up on the list). Everyone has to deal with the limits on time and other resources. That’s why for example, if someone claims that hopping on one foot cures colon cancer we don’t generally bother testing it. That’s true for both the Bayesian and the traditionalist.
Sure, it’s useful in some abstract sense and on various math problems, but you can’t program a computer this way, nor can you live your life trying to compute statistics like this in your head
I’m curious as to why you claim that you can’t program a computer this way. For example, automatic Bayesian curve fitting has been around for almost 20 years and is a useful machine learning mechanism. Sure, it is much more narrow than applying Bayesianism to understanding reality as a whole, but until we crack the general AI problem, it isn’t clear to me how you can be sure that that’s a fault of the Bayesian end and not the AI end. If we can understand how to make general intelligences I see no immediate reason why one couldn’t make them be good Bayesians.
I agree that, in general, trying to compute statistics in one’s head is difficult. But I don’t see why that rules out doing it for the important things. No one is claiming to be a perfect Bayesian. I don’t think, for example, that any Bayesian walking into a building tries to estimate the probability that the building will immediately collapse. Maybe they do if the building is very rickety looking, but otherwise they just think of it as so tiny as to not bother examining. But Bayesian updating is a useful way of thinking about many classes of scientific issues, as well as general life issues (estimates of how long it will take to get somewhere, or of how many people will attend a party based on the number invited and the number who RSVPed, can both be thought of in somewhat Bayesian terms). Moreover, forcing oneself to do a Bayesian calculation can help bring into the light many estimates and premises that were otherwise hiding behind vagueness or implicit structures.
So how do you tell if a belief is strange? Presumably if the evidence points in one direction, one shouldn’t regard that belief as strange. Can you give an example of a belief that should be considered a bad one to hold because of its strangeness, but that a Bayesian could plausibly be led to accept in this way?
Well, for example, if you have a situation where the evidence leads you to believe that something is true, and there is an easy, simple, reliable test to prove it’s not true, why would the Bayesian method waste its time? Imagine you witness something which could be possible, but is extremely odd. Like gravity not working or something. It could be a hallucination, or a glitch if you’re talking about a computer, and there might be an easy way to prove whether it is or isn’t. Under either scenario, whether it’s a hallucination or reality is just weird, the method makes an assumption and then has no reason to check whether this is correct. Actually, that might have been a bad example, but pretty much every scenario you can think of, where making an assumption can be a bad thing and you can test the assumptions, would work.
I’m curious as to why you claim that you can’t program a computer this way. For example, automatic Bayesian curve fitting has been around for almost 20 years and is a useful machine learning mechanism. Sure, it is much more narrow than applying Bayesianism to understanding reality as a whole, but until we crack the general AI problem, it isn’t clear to me how you can be sure that that’s a fault of the Bayesian end and not the AI end. If we can understand how to make general intelligences I see no immediate reason why one couldn’t make them be good Bayesians.
Well, if you can’t program a viable AI out of it, then it’s not a universal truth of rationality. Sure, you might be able to use it if it’s complemented and powered by other mechanisms, but then it’s not a universal truth, is it? That was my point. If it is an important tool, then I have no doubt that once we make AI, it will discover it itself, or may even have it in its original program.
Well, for example, if you have a situation where the evidence leads you to believe that something is true, and there is an easy, simple, reliable test to prove it’s not true, why would the Bayesian method waste its time? Imagine you witness something which could be possible, but is extremely odd. Like gravity not working or something. It could be a hallucination, or a glitch if you’re talking about a computer, and there might be an easy way to prove whether it is or isn’t. Under either scenario, whether it’s a hallucination or reality is just weird, the method makes an assumption and then has no reason to check whether this is correct. Actually, that might have been a bad example, but pretty much every scenario you can think of, where making an assumption can be a bad thing and you can test the assumptions, would work.
Firstly, priors are important; if something has a low prior probability, it’s not generally going to get to a high probability quickly. Secondly, not all evidence has the same strength. Remember in particular that the strength of evidence is measured by the likelihood ratio. If you see something that could likely be caused by hallucinations, that isn’t necessarily very strong evidence for it; but hallucinations are not totally arbitrary, IINM. Still, if you witness objects spontaneously floating off the ground, even if you know this is an unlikely hallucination, the prior for some sort of gravity failure will be so low that the posterior will probably still be very low. Not that those are the only two alternatives, of course.
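As a rough numerical sketch of that point (all the numbers below are invented purely for illustration, not taken from the comment): Bayes’ theorem in odds form says posterior odds = prior odds × likelihood ratio, so even strongly favourable evidence leaves a hypothesis with a tiny prior at a tiny posterior.

```python
def posterior_prob(prior, likelihood_ratio):
    """Posterior probability from Bayes' rule in odds form:
    posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Hypothetical numbers: prior of 1e-9 that gravity has actually failed, and an
# observation judged 1000 times likelier under "gravity failed" than under
# "hallucination / mundane explanation".
print(posterior_prob(1e-9, 1000))   # ~1e-6: still very low
```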
Well, for example, if you have a situation where the evidence leads you to believe that something is true, and there is an easy, simple, reliable test to prove it’s not true, why would the Bayesian method waste its time? Imagine you witness something which could be possible, but is extremely odd. Like gravity not working or something. It could be a hallucination, or a glitch if you’re talking about a computer, and there might be an easy way to prove whether it is or isn’t. Under either scenario, whether it’s a hallucination or reality is just weird, the method makes an assumption and then has no reason to check whether this is correct. Actually, that might have been a bad example, but pretty much every scenario you can think of, where making an assumption can be a bad thing and you can test the assumptions, would work.
If there is an “easy, simple, reliable test” to determine the claim’s truth with high confidence, why do you think a Bayesian wouldn’t make that test?
Well, if you can’t program a viable AI out of it, then it’s not a universal truth of rationality.
Can you expand your logic for this? In particular, it seems like you are using a definition of “universal truth to rationality” which needs to be expanded out.
If there is an “easy, simple, reliable test” to determine the claim’s truth with high confidence, why do you think a Bayesian wouldn’t make that test?
Because it’s not a decision-making theory, but one that judges probability. The Bayesian method will examine what it has and decide the probability of different situations. Other than that, it doesn’t actually do anything. It takes an entirely different system to actually act on the information given. If it is a simple system that just assumes whichever hypothesis has the highest probability is correct, then it isn’t going to bother testing it.
The Bayesian method will examine what it has and decide the probability of different situations. Other than that, it doesn’t actually do anything. It takes an entirely different system to actually act on the information given. If it is a simple system that just assumes whichever hypothesis has the highest probability is correct, then it isn’t going to bother testing it.
But a Bayesian won’t assume that whichever claim has the highest probability is correct. That’s one of the whole points of a Bayesian approach: every claim is probabilistic. If one claim is more likely than another, the Bayesian isn’t going to lie to itself and say that the most probable claim now has a probability of 1. That’s not Bayesianism. You seem to be engaging in what may be a form of the mind projection fallacy, in that humans often take what seems to be a high-probability claim and then treat it like it has a much, much higher probability (this is due to a variety of cognitive biases such as confirmation bias and belief overkill). A good Bayesian doesn’t do that. I don’t know where you are getting this notion of a “simple system” that did that. If it did, it wouldn’t be a Bayesian.
But a Bayesian won’t assume that whichever claim has the highest probability is correct. That’s one of the whole points of a Bayesian approach: every claim is probabilistic. If one claim is more likely than another, the Bayesian isn’t going to lie to itself and say that the most probable claim now has a probability of 1. That’s not Bayesianism. You seem to be engaging in what may be a form of the mind projection fallacy, in that humans often take what seems to be a high-probability claim and then treat it like it has a much, much higher probability (this is due to a variety of cognitive biases such as confirmation bias and belief overkill). A good Bayesian doesn’t do that. I don’t know where you are getting this notion of a “simple system” that did that. If it did, it wouldn’t be a Bayesian.
I’m not exactly sure what you mean by all of this. How does a Bayesian system make decisions if not by just going on its most probable hypothesis?
To make decisions, you combine probability estimates of outcomes with a utility function, and maximize expected utility. A possibility with very low probability may nevertheless change a decision, if that possibility has a large enough effect on utility.
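A minimal sketch of that point, with invented probabilities and utilities: an outcome with only a 1% chance can still flip the decision if it matters enough.

```python
def expected_utility(outcomes):
    """outcomes: iterable of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Hypothetical choice: a shortcut saves time (utility +10) but has a 1% chance
# of a disastrous outcome (utility -10_000); the safe route is the baseline 0.
shortcut = [(0.99, 10), (0.01, -10_000)]
safe_route = [(1.0, 0)]

print(expected_utility(shortcut))    # -90.1
print(expected_utility(safe_route))  # 0.0: the low-probability outcome decides it
```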
See the reply I made to AlephNeil. Also, this still doesn’t change my scenario. If there’s a way to test a hypothesis, I see no reason the Bayesian method ever would, even if it seems like common sense to look before you leap.
Anyone know why I can only post comments every 8 minutes? Is the bandwidth really that bad?
Bayesianism is only a predictor; it gets you from prior probabilities plus evidence to posterior probabilities. You can use it to evaluate the likelihood of statements about the outcomes of actions, but it will only ever give you probabilities, not normative statements about what you should or shouldn’t do, or what you should or shouldn’t test. To answer those questions, you need to add a decision theory, which lets you reason from a utility function plus a predictor to a strategy, and a utility function, which takes a description of an outcome and assigns a score indicating how much you like it.
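A schematic sketch of that separation of roles (every name here is my own invention, nothing standard): the predictor only outputs probabilities over outcomes, the utility function only scores outcomes, and the decision rule is the separate piece that combines them.

```python
from typing import Callable, Dict

def decide(actions, predict: Callable[[str], Dict[str, float]],
           utility: Callable[[str], float]) -> str:
    """Decision rule: pick the action with the highest expected utility."""
    return max(actions, key=lambda a: sum(p * utility(o)
                                          for o, p in predict(a).items()))

# Toy stand-ins for the other two components.
predict = lambda action: {
    "run the test":  {"learn the truth": 0.95, "waste an hour": 0.05},
    "skip the test": {"act on a guess": 1.0},
}[action]
utility = {"learn the truth": 5, "waste an hour": -1, "act on a guess": 2}.get

print(decide(["run the test", "skip the test"], predict, utility))  # run the test
```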
The rate-limit on posting isn’t because of bandwidth, it’s to defend against spammers who might otherwise try to use scripts to post on every thread at once. I believe it goes away with karma, but I don’t know what the threshold is.
Anyone know why I can only post comments every 8 minutes? Is the bandwidth really that bad?
You face limits on your rate of posting if you’re at or below 0 karma, which seems to be the case for you. How you got modded down so much, I’m not so sure of.
How you got modded down so much, I’m not so sure of.
Bold, unjustified political claims. Bold, unjustified claims that go against consensus. Bad spelling/grammar. Also a Christian, but those comments don’t seem to be negative karma.
Yeah, I hadn’t been following Houshalter very closely, and the few that I did see weren’t about politics, and seemed at least somewhat reasonable. (Maybe I should have checked the posting history, but I was just saying I’m not sure, not that the opposite would be preferable.)
What bold unjustified political claims? You do realise that every other person on this site I’ve met so far has some kind of extreme political view. I thought I was kind of reasonable.
Bold, unjustified claims that go against consensus.
In other words, I disagreed with you. I always look for the reasons to doubt something or believe in something else before I just “go along with it”.
Bad spelling/grammar.
What’s wrong with my spelling/grammar? I double check everything before I post it!
Bold, unjustified claims that go against consensus.
In other words, I disagreed with you. I always look for the reasons to doubt something or believe in something else before I just “go along with it”.
No. In other words, you’ve made claims that assume statements against consensus, often without even realizing it or giving any justification when you do so. As I already explained to you, the general approach at LW has been hashed out quite a bit. Some people (such as myself) disagree with a fair bit. For example, I’m much closer to being a traditional rationalist than a Bayesian rationalist and I also assign a very low probability to a Singularity-type event. But I’m aware enough to know when I’m operating under non-consensus views so I’m careful to be explicit about what those views are and if necessary, note why I have them. I’m not the only such example. Alicorn for example (who also replied to this post) has views on morality that are a distinct minority in LW, but Alicorn is careful whenever these come up to reason carefully and make her premises explicit. Thus, the comments are far more likely to be voted up than down.
Your persecuting me because of my religion!?
Well, for the people complaining about grammar: “Your” → “You’re”
But no, you’ve only mentioned your religious views twice, I think, and once in passing. The votes down there were, I’m pretty sure, because your personal religious viewpoint was utterly beside the point being made about the general LW consensus.
What bold unjustified political claims? You do realise that every other person on this site I’ve met so far has some kind of extreme political view. I thought I was kind of reasonable.
Emphasis on ‘unjustified’. Example. This sounds awfully flippant and sure of yourself—“This system wouldn’t work at all”. Why do you suppose so many people, including professional political scientists / political philosophers / philosophers of law think that it would work? Do you have an amazing insight that they’re all missing? Sure, there are people with many different positions on this issue, but unless you’re actually going to join the debate and give solid reasons, you weren’t really contributing anything with this comment.
Also, comments on political issues are discouraged, as politics is the mind-killer. Unless you’re really sure your political comment is appropriate, hold off on posting it. And if you’re really sure your political comment is too important not to post, you should check to make sure you’re being rational, as that’s a good sign you’re not.
In other words, I disagreed with you. I always look for the reasons to doubt something or believe in something else before I just “go along with it”.
Again, emphasis on ‘unjustified’. If people here believe something, there are usually very good reasons for it. Going against that without at least attempting a justification is not recommended. Here are hundreds of people who have spent years trying to understand how to, in general, be correct about things, and they have managed to reach agreement on some issues. You should be shaken by that, unless you know precisely where they’ve all gone wrong, and in that case you should say so. If you’re right, they’ll all change their minds.
Also a Christian
Your[sic] persecuting me because of my religion!?
You’ve indicated you have false beliefs. That is a point against you. Also if you think the world is flat, the moon is made of green cheese, or 2+2=3, and don’t manage to fix that when someone tells you you’re wrong, rationalists will have a lower opinion of you. If you manage to convince them that 2+2=3, then you win back more points than you’ve lost, but it’s probably not worth the try.
Emphasis on ‘unjustified’. Example. This sounds awfully flippant and sure of yourself—“This system wouldn’t work at all”. Why do you suppose so many people, including professional political scientists / political philosophers / philosophers of law think that it would work?
Because they don’t!? I was talking about how the FDA is right; the thing that “wouldn’t work at all” is an unregulated drug industry. If you don’t like my opinion, fine, but lots of people would agree with me including many of those “political philosophers” you speak so highly of.
If you’re right, they’ll all change their minds.
In my experience, people rarely change their minds once they’re sure of something. That’s not to say it doesn’t happen, otherwise why would I try. The point of argument is to try to get both people on the same ground; then they can both choose for themselves which is right, even if they don’t publicly admit “defeat”.
You’ve indicated you have false beliefs.
What if it’s not a false belief? It’s a lot different from “2+2=3” or “the world is flat”. Why? Because you can prove those things correct or incorrect.
If you manage to convince them that 2+2=3, then you win back more points than you’ve lost, but it’s probably not worth the try.
What if it’s not a false belief? It’s a lot different from “2+2=3” or “the world is flat”. Why? Because you can prove those things correct or incorrect.
The extremely low prior probability and the total lack of evidence allow us, as Bayesians, to dismiss it as false. Taboo the word “proof”, because it’s not useful to us in this context.
Because they don’t!? I was talking about how the FDA is right; the thing that “wouldn’t work at all” is an unregulated drug industry. If you don’t like my opinion, fine, but lots of people would agree with me including many of those “political philosophers” you speak so highly of.
Speaking as someone who thinks that the general outline of your point in that thread is the correct conclusion, the problem is you gave zero evidence or logic for why you would be correct. Suppose someone says “Hey we do things like X right now, but what if we did Y instead?” You can’t just respond “Y won’t work.” If you say “Y won’t work because of problems A, B, C” or “X works better than Y because of problems D, E, F” then you’ve got a discussion going. But otherwise, all you have is someone shouting “is not”/”is too.”
What if it’s not a false belief? It’s a lot different from “2+2=3” or “the world is flat”. Why? Because you can prove those things correct or incorrect.
If we’re talking about the religion matter again, which it seems we are, weren’t you already linked to the Mysterious Answers sequence? And I’m pretty sure you were explicitly given this post. Maybe instead of just waiting out the 8 minutes between posts, read some of the things people have asked you to read? Or maybe spend a few hours just reading the sequences?
Edit: It is possible that you are running into problems with inferential distance.
In my experience, people rarely change their minds once they’re sure of something.
That matches my experience everywhere except LW.
If you don’t like my opinion, fine, but lots of people would agree with me including many of those “political philosophers” you speak so highly of.
Again, I did not say I disagreed with you, or that people downvoted you because they disagreed with you. Rather, you’re making a strong political claim without stating any justification, and not actually contributing anything in the process.
What if it’s not a false belief? It’s a lot different from “2+2=3” or “the world is flat”. Why? Because you can prove those things correct or incorrect.
There is strong evidence that the world is not flat. There is also strong evidence that the Christian God doesn’t exist, and in fact to an indifferent agent the (very algorithmically complex) hypothesis that the Christian God exists shouldn’t even be elevated to the level of attention.
For the same reason you were incorrect in your reply to AlephNeil: performing experiments can increase utility if which course of action is optimal depends on which hypothesis is most likely.
If your utility function’s goal is to get the most accurate hypothesis (not act on it), sure. Otherwise, why waste its time testing something that it already believes is true? If your goal is to get the highest “utility” possible, then wasting time or resources, no matter how small, is inefficient. This means that you’re moving the blame off the Bayesian end and onto the “utility function”, but it’s still a problem.
If your utility function’s goal is to get the most accurate hypothesis (not act on it), sure. Otherwise, why waste its time testing something that it already believes is true? If your goal is to get the highest “utility” possible, then wasting time or resources, no matter how small, is inefficient. This means that you’re moving the blame off the Bayesian end and onto the “utility function”, but it’s still a problem.
But you don’t believe it is true; there’s some probability associated with it. Consider, for example, the following situation. Your friend rolls a standard pair of 6-sided dice without you seeing them. If you guess the correct total you get $1000. Now, it is clear that your best guess is 7, since that is the most common outcome. So you guess 7 and 1/6th of the time you get it right.
Now, suppose you have the slightly different game where before you make your guess, you may pay your friend $1 and the friend will tell you the lowest number that appeared. You seem to think that for some reason a Bayesian wouldn’t do this because they already know that 7 is most likely. But of course they would, because paying the $1 increases their expected pay-off.
In general, increasing the accuracy of your map of the universe is likely to increase your utility. Sometimes it isn’t, and so we don’t bother. Neither a Bayesian rationalist nor a traditional rationalist is going to try to, say, count all the bricks on the facade of their apartment building, even though it would increase the accuracy of their model, because this isn’t an interesting piece of the model that is at all likely to tell them anything useful compared to other things they could spend their limited resources on. If one were an immortal and really running low on things to do, maybe counting them would be a high priority.
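Here is a quick simulation of the dice game above (a sketch; the “guess the lowest die plus six” strategy after buying the hint is my own working, not something stated in the comment). Guessing 7 blind wins 1/6 of the time for an expected payoff of about $167, while paying the $1 raises the win rate to 11/36 and the expected payoff to roughly $305.

```python
import random

def expected_payoff(buy_info, trials=200_000, fee=1, prize=1000):
    """Average payoff of the dice-guessing game, with or without buying the hint."""
    total = 0
    for _ in range(trials):
        a, b = random.randint(1, 6), random.randint(1, 6)
        if buy_info:
            guess = min(a, b) + 6   # most likely total once the lowest die is known
            total += (prize if guess == a + b else 0) - fee
        else:
            total += prize if a + b == 7 else 0   # 7 is the most likely total blind
    return total / trials

print("guess 7 blind:   ", round(expected_payoff(buy_info=False)))  # ~167
print("pay $1 for hint: ", round(expected_payoff(buy_info=True)))   # ~305
```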
All right, consider a situation where there is a very, very small probability that something will work, but it gives infinite utility (or at least extraordinarily large utility). The risk for doing it is also really high, but because it is finite, the Bayesian utility function will evaluate it as acceptable because of the infinite reward involved. On paper, this works out. If you do it enough times, you succeed, and after you subtract the total cost from all those other times, you still have infinity. But in practice most people consider this a very bad course of action. The risk can be very high, perhaps your life, so even the traditional rationalist would avoid doing this. Do you see where the problem is? It’s the fact that you only get a finite number of tries in reality, but the Bayesian utility function calculates it as though you did it an infinite number of times and gives you the net utility.
Yes, you aren’t the first person to make this observation. However, this isn’t a problem with Bayesianism so much as with utilitarianism giving counter-intuitive results when large numbers are involved. See for example Torture v. dust specks or Pascal’s Mugging. See especially Nyarlathotep’s Deal, which is very close to the situation you are talking about and shows that the problem seems to reside more in utilitarianism than Bayesianism. It may very well be that human preferences are just inconsistent. But this issue has very little to do with Bayesianism.
This isn’t a problem with Bayesianism so much as with utilitarianism giving counter-intuitive results when large numbers are involved.
Counter-intuitive!? That’s a little more than just counter-intuitive. Imagine the CEV uses this function. Doctor Evil approaches it and says that an infinite number of humans will be sacrificed if it doesn’t let him rule the world. And there are a lot more realistic problems like that too. I think the problem comes from the fact that the net utility of all possible worlds and actual utility are not the same thing. I don’t know how to do it better, but you might want to think twice before you use this to make trade-offs.
Ah. It seemed like you hadn’t because rather than use the example there you used a very similar case. I don’t know a universal solution either. But it should be clear that the problem exists for non-Bayesians so the dilemma isn’t a problem with Bayesianism.
My guess at what’s going on here is that you’re intuitively modeling yourself as having a bounded utility function. In which case (letting N denote an upper bound on your utility), no gamble where the probability of the “good” outcome is less than −1/N times the utility of the “bad” outcome could ever be worth taking. Or, translated into plain English: there are some risks such that no reward could make them worth it—which, you’ll note, is a constraint on rewards.
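One way to spell that out (my notation, not the commenter’s; the quoted threshold is the small-probability approximation of this): take declining the gamble as utility 0, let the good outcome have probability p and utility at most N, and let the bad outcome have utility U_bad < 0. Then

```latex
\mathbb{E}[U] \;=\; p\,U_{\text{good}} + (1-p)\,U_{\text{bad}}
          \;\le\; p\,N + (1-p)\,U_{\text{bad}},
\qquad \text{so the gamble beats the status quo only if }\;
\frac{p}{1-p} \;>\; \frac{-U_{\text{bad}}}{N}.
```

For small p this is roughly the quoted condition p > -U_bad/N: below that threshold, no reward a bounded utility function can offer makes up for the risk.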
That’s my question for you! I was attempting to explain the intuition that generated these remarks of yours:
The risk for doing it is also really high, but… the bayesian utility function will evaluate it as acceptable because of the [extraordinarily large] reward involved. On paper, this works out...But in practice most people consider this a very bad course of action
Otherwise, why waste its time testing something that it already believes is true?
Because it might be false. If your utility function requires you to collect green cheese, and so you want to make a plan to go to the moon to collect the green cheese, you should know how much you’ll have to spend getting to the moon, and what the moon is actually made of. And so it is written, “If you fail to achieve a correct answer, it is futile to protest that you acted with propriety.”
You try to maximize your expected utility. Perhaps having done your calculations, you think that action X has a 5⁄6 chance of earning you £1 and a 1⁄6 chance of killing you (perhaps someone’s promised you £1 if you play Russian Roulette).
Presumably you don’t base your decision entirely on the most likely outcome.
So in this scenario you have to decide how much your life is worth in money. You can go home and not take any chance of dying, or take a 1⁄6 chance of dying to earn X amount of money. It’s an extension of the risk/reward problem, basically, and you have to decide how much risk is worth in money before you can complete it. That’s a problem, because as far as I know, Bayesianism doesn’t cover that.
It’s not the job of ‘Bayesianism’ to tell you what your utility function is.
This [by which I mean, “the question of where the agent’s utility function comes from”] doesn’t have anything to do with the question of whether Bayesian decision-making takes account of more than just the most probable hypothesis.
Only for the stated purpose of this website—to be “less wrong”! :) Quoting from Science Isn’t Strict Enough:
But the Way of Bayes is also much harder to use than Science. It puts a tremendous strain on your ability to hear tiny false notes, where Science only demands that you notice an anvil dropped on your head.
In Science you can make a mistake or two, and another experiment will come by and correct you; at worst you waste a couple of decades.
But if you try to use Bayes even qualitatively—if you try to do the thing that Science doesn’t trust you to do, and reason rationally in the absence of overwhelming evidence—it is like math, in that a single error in a hundred steps can carry you anywhere. It demands lightness, evenness, precision, perfectionism.
There’s a good reason why Science doesn’t trust scientists to do this sort of thing, and asks for further experimental proof even after someone claims they’ve worked out the right answer based on hints and logic.
But if you would rather not waste ten years trying to prove the wrong theory, you’ll need to essay the vastly more difficult problem: listening to evidence that doesn’t shout in your ear.
As for the rest of your comment: I completely agree! That was actually the explanation that the OP, komponisto, gave to me to get Bayesianism (edit: I actually mean “the idea that probability theory can be used to override your intuitions and get to correct answers”) to “click” for me (insofar as it has “clicked”). But the way that it’s represented in the post is really helpful, I think, because it eliminates even the need to imagine that there are more doors; it addresses the specifics of that actual problem, and you can’t argue with the numbers!
I don’t get it really. I mean, I get the method, but not the formula. Is this useful for anything though?
Quite a bit! (A quick Google Scholar search turns up about 1500 papers on methods and applications, and there are surely more.)
The formula tells you how to change your strength of belief in a hypothesis in response to evidence (this is ‘Bayesian updating’, sometimes shortened to just ‘updating’). Because the formula is a trivial consequence of the definition of a conditional probability, it holds in any situation where you can quantify the evidence and the strength of your beliefs as probabilities. This is why many of the people on this website treat it as the foundation of reasoning from evidence; the formula is very general.
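For reference, the formula in question and its one-line derivation from the definition of conditional probability (standard notation):

```latex
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)},
\qquad\text{since}\qquad
P(H \mid E)\,P(E) \;=\; P(H \cap E) \;=\; P(E \mid H)\,P(H).
```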
Eliezer Yudkowsky’s Intuitive Explanation of Bayes’ Theorem page goes into this in more detail and at a slower pace. It has a few nice Java applets that you can use to play with some of the ideas with specific examples, too.
Also, a simpler method of explaining the Monty Hall problem is to think of it as if there were more doors. Let’s say there were a million (that’s a lot of goats). You pick one and the host eliminates every other door except one. The probability you picked the right door is one in a million, but he had to make sure that the door he left unopened was the one with the car behind it, unless you picked the one with the car, which is a one-in-a-million chance.
That’s awesome. I shall use it in the future. Wish I could multi upvote.
The way I like to think of the Monty Hall problem is like this… if you had the choice of picking either one of the three doors or two of the three doors (if the car is behind either, you win it), you would obviously pick two of the doors to give yourself a 2⁄3 chance of winning. Similarly, if you had picked your original door and then Monty asked if you’d trade your one door for the other two doors (all sight unseen), it would again be obvious that you should make the trade. Now… when you make that trade, you know that at least one of the doors you’re getting in trade has a goat behind it (there’s only one car, you have two doors, so you have to have at least one goat). So, given that knowledge and the certainty that trading one door for two is the right move (statistically), would seeing the goat behind one of the doors you’re trading for before you make the trade change the wisdom of the trade? You KNOW that you’re getting at least one goat in either case. Most people who I’ve explained it to in this way seem to see that making the trade still makes sense (and is equivalent to making the trade in the original scenario).
I think the struggle is that people tend to dismiss the existence of the 3rd door once they see what’s behind it. It sort of drops out of the picture as a resolved thing, and then the mind erroneously reformulates the situation with just the two remaining doors. The scary thing is that people are generally quite easily manipulated with these sorts of puzzles, and there are plenty of circumstances (DNA evidence given during jury trials comes to mind) where the probabilities being presented are wildly misleading as a result of erroneously eliminating segments of the problem space because they are “known”.
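To illustrate the jury-trial point with deliberately invented numbers (a sketch of the classic base-rate trap, not a claim about any real case): a “one in a million” false-positive rate sounds decisive, but if the suspect was found by searching a huge database with no other evidence, the posterior probability of guilt is nowhere near 999,999 in a million.

```python
def posterior_guilt(prior, false_positive_rate, true_positive_rate=1.0):
    """P(source of the sample | DNA match), by Bayes' theorem."""
    p_match = true_positive_rate * prior + false_positive_rate * (1 - prior)
    return true_positive_rate * prior / p_match

# Invented numbers: suspect located by trawling a database of 5 million people,
# so the prior that this particular person is the source is about 1 in 5 million.
print(posterior_guilt(prior=1 / 5_000_000, false_positive_rate=1e-6))  # ~0.17
```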
I don’t get it really. I mean, I get the method, but not the formula. Is this useful for anything though?
There’s a significant population of people—disproportionately represented here—who consider Bayesian reasoning to be theoretically superior to the ad hoc methods habitually used. An introductory essay on the subject that many people here read and agreed with is A Technical Explanation of Technical Explanation.
I don’t get it really. I mean, I get the method, but not the formula. Is this useful for anything though?
Also, a simpler method of explaining the Monty Hall problem is to think of it as if there were more doors. Let’s say there were a million (that’s a lot of goats). You pick one and the host eliminates every other door except one. The probability you picked the right door is one in a million, but he had to make sure that the door he left unopened was the one with the car behind it, unless you picked the one with the car, which is a one-in-a-million chance.
It might help to read the sequences, or just read Jaynes. In particular, one of the central ideas of the LW approach to rationality is that when one encounters new evidence one should update one’s belief structure based on this new evidence and one’s prior estimates, using Bayes’ theorem. Roughly speaking, this is in contrast to what is sometimes described as “traditional rationalism”, which doesn’t emphasize updating on each piece of evidence but rather updating only after one has a lot of clearly relevant evidence.
Edit: Recommendation of Map-Territory sequence seems incorrect. Which sequence is the one to recommend here?
How to Actually Change your Mind and Mysterious Answers to Mysterious Questions
Updating your belief based on different pieces of evidence is useful, but (and it’s a big but) just believing strange things based on incomplete evidence is bad. Also, this neglects the fact of time. If you had an infinite amount of time to analyze every possible scenario, you could get away with this, but otherwise you have to make quick assumptions. Then, instead of testing whether these assumptions are correct, you just go with them wherever they take you. If only you could “learn how to learn” and use the Bayesian method on different methods of learning; e.g., test out different heuristics and see which ones give the best results. In the end, you find humans already do this to some extent, and “traditional rationalism” and science are based on the end result of this method. Is this making any sense? Sure, it’s useful in some abstract sense and on various math problems, but you can’t program a computer this way, nor can you live your life trying to compute statistics like this in your head.
Other than that, I can see different places where this would be useful.
And so it is written, “Even if you cannot do the math, knowing that the math exists tells you that the dance step is precise and has no room in it for your whims.”
I may not be the best person to reply to this given that I a) am much closer to being a traditional rationalist than a Bayesian and b) believe that the distinction between Bayesian rationalism and traditional rationalism is often exaggerated. I’ll try to do my best.
So how do you tell if a belief is strange? Presumably if the evidence points in one direction, one shouldn’t regard that belief as strange. Can you give an example of a belief that should be considered a bad one to hold because of its strangeness, but that a Bayesian could plausibly be led to accept in this way?
Well yes, and no. The Bayesian starts with some set of prior probability estimates, general heuristics about how the world seems to operate (reductionism and locality would probably be high up on the list). Everyone has to deal with the limits on time and other resources. That’s why for example, if someone claims that hopping on one foot cures colon cancer we don’t generally bother testing it. That’s true for both the Bayesian and the traditionalist.
I’m curious as to why you claim that you can’t program a computer this way. For example, automatic Bayesian curve fitting has been around for almost 20 years and is a useful machine learning mechanism. Sure, it is much more narrow than applying Bayesianism to understanding reality as a whole, but until we crack the general AI problem, it isn’t clear to me how you can be sure that that’s a fault of the Bayesian end and not the AI end. If we can understand how to make general intelligences I see no immediate reason why one couldn’t make them be good Bayesians.
I agree that, in general, trying to compute statistics in one’s head is difficult. But I don’t see why that rules out doing it for the important things. No one is claiming to be a perfect Bayesian. I don’t think, for example, that any Bayesian walking into a building tries to estimate the probability that the building will immediately collapse. Maybe they do if the building is very rickety looking, but otherwise they just think of it as so tiny as to not bother examining. But Bayesian updating is a useful way of thinking about many classes of scientific issues, as well as general life issues (estimates of how long it will take to get somewhere, or of how many people will attend a party based on the number invited and the number who RSVPed, can both be thought of in somewhat Bayesian terms). Moreover, forcing oneself to do a Bayesian calculation can help bring into the light many estimates and premises that were otherwise hiding behind vagueness or implicit structures.
Guessing here you mean locality instead of nonlocality?
Yes, fixed thank you.
Well, for example, if you have a situation where the evidence leads you to believe that something is true, and there is an easy, simple, reliable test to prove it’s not true, why would the Bayesian method waste its time? Imagine you witness something which could be possible, but is extremely odd. Like gravity not working or something. It could be a hallucination, or a glitch if you’re talking about a computer, and there might be an easy way to prove whether it is or isn’t. Under either scenario, whether it’s a hallucination or reality is just weird, the method makes an assumption and then has no reason to check whether this is correct. Actually, that might have been a bad example, but pretty much every scenario you can think of, where making an assumption can be a bad thing and you can test the assumptions, would work.
Well, if you can’t program a viable AI out of it, then it’s not a universal truth of rationality. Sure, you might be able to use it if it’s complemented and powered by other mechanisms, but then it’s not a universal truth, is it? That was my point. If it is an important tool, then I have no doubt that once we make AI, it will discover it itself, or may even have it in its original program.
Firstly, priors are important; if something has a low prior probability, it’s not generally going to get to a high probability quickly. Secondly, not all evidence has the same strength. Remember in particular that the strength of evidence is measured by the likelihood ratio. If you see something that could likely be caused by hallucinations, that isn’t necessarily very strong evidence for it; but hallucinations are not totally arbitrary, IINM. Still, if you witness objects spontaneously floating off the ground, even if you know this is an unlikely hallucination, the prior for some sort of gravity failure will be so low that the posterior will probably still be very low. Not that those are the only two alternatives, of course.
If there is an “easy, simple, reliable test” to determine the claim’s truth with high confidence, why do you think a Bayesian wouldn’t make that test?
Can you expand your logic for this? In particular, it seems like you are using a definition of “universal truth to rationality” which needs to be expanded out.
Because it’s not a decision-making theory, but one that judges probability. The Bayesian method will examine what it has and decide the probability of different situations. Other than that, it doesn’t actually do anything. It takes an entirely different system to actually act on the information given. If it is a simple system that just assumes whichever hypothesis has the highest probability is correct, then it isn’t going to bother testing it.
But a Bayesian won’t assume that whichever claim has the highest probability is correct. That’s one of the whole points of a Bayesian approach: every claim is probabilistic. If one claim is more likely than another, the Bayesian isn’t going to lie to itself and say that the most probable claim now has a probability of 1. That’s not Bayesianism. You seem to be engaging in what may be a form of the mind projection fallacy, in that humans often take what seems to be a high-probability claim and then treat it like it has a much, much higher probability (this is due to a variety of cognitive biases such as confirmation bias and belief overkill). A good Bayesian doesn’t do that. I don’t know where you are getting this notion of a “simple system” that did that. If it did, it wouldn’t be a Bayesian.
I’m not exactly sure what you mean by all of this. How does a Bayesian system make decisions if not by just going on its most probable hypothesis?
To make decisions, you combine probability estimates of outcomes with a utility function, and maximize expected utility. A possibility with very low probability may nevertheless change a decision, if that possibility has a large enough effect on utility.
See the reply I made to AlephNeil. Also, this still doesn’t change my scenario. If there’s a way to test a hypothesis, I see no reason the Bayesian method ever would, even if it seems like common sense to look before you leap.
Anyone know why I can only post comments every 8 minutes? Is the bandwidth really that bad?
Bayesianism is only a predictor; it gets you from prior probabilities plus evidence to posterior probabilities. You can use it to evaluate the likelihood of statements about the outcomes of actions, but it will only ever give you probabilities, not normative statements about what you should or shouldn’t do, or what you should or shouldn’t test. To answer those questions, you need to add a decision theory, which lets you reason from a utility function plus a predictor to a strategy, and a utility function, which takes a description of an outcome and assigns a score indicating how much you like it.
The rate-limit on posting isn’t because of bandwidth, it’s to defend against spammers who might otherwise try to use scripts to post on every thread at once. I believe it goes away with karma, but I don’t know what the threshold is.
You face limits on your rate of posting if you’re at or below 0 karma, which seems to be the case for you. How you got modded down so much, I’m not so sure of.
Bold, unjustified political claims. Bold, unjustified claims that go against consensus. Bad spelling/grammar. Also a Christian, but those comments don’t seem to be negative karma.
I can attest that being Christian itself does not seem to make a negative difference. :D
Upvoted. That took me a minute to get.
Yeah, I hadn’t been following Houshalter very closely, and the few that I did see weren’t about politics, and seemed at least somewhat reasonable. (Maybe I should have checked the posting history, but I was just saying I’m not sure, not that the opposite would be preferable.)
What bold unjustified political claims? You do realise that every other person on this site I’ve met so far has some kind of extreme political view. I thought I was kind of reasonable.
In other words, I disagreed with you. I always look for the reasons to doubt something or believe in something else before I just “go along with it”.
What’s wrong with my spelling/grammar? I double check everything before I post it!
You’re persecuting me because of my religion!?
Whatever. I’ll post again in 8 minutes I guess.
In this comment:
Whats → What’s
Your → You’re
Also, arguably a missing comma before “I guess”.
No. In other words, you’ve made claims that assume statements against consensus, often without even realizing it or giving any justification when you do so. As I already explained to you, the general approach at LW has been hashed out quite a bit. Some people (such as myself) disagree with a fair bit. For example, I’m much closer to being a traditional rationalist than a Bayesian rationalist and I also assign a very low probability to a Singularity-type event. But I’m aware enough to know when I’m operating under non-consensus views so I’m careful to be explicit about what those views are and if necessary, note why I have them. I’m not the only such example. Alicorn for example (who also replied to this post) has views on morality that are a distinct minority in LW, but Alicorn is careful whenever these come up to reason carefully and make her premises explicit. Thus, the comments are far more likely to be voted up than down.
Well, for the people complaining about grammar: “Your” → “You’re”
But no, you’ve only mentioned your religious views twice, I think, and once in passing. The votes down there were, I’m pretty sure, because your personal religious viewpoint was utterly beside the point being made about the general LW consensus.
Emphasis on ‘unjustified’. Example. This sounds awfully flippant and sure of yourself—“This system wouldn’t work at all”. Why do you suppose so many people, including professional political scientists / political philosophers / philosophers of law think that it would work? Do you have an amazing insight that they’re all missing? Sure, there are people with many different positions on this issue, but unless you’re actually going to join the debate and give solid reasons, you weren’t really contributing anything with this comment.
Also, comments on political issues are discouraged, as politics is the mind-killer. Unless you’re really sure your political comment is appropriate, hold off on posting it. And if you’re really sure your political comment is too important not to post, you should check to make sure you’re being rational, as that’s a good sign you’re not.
Again, emphasis on ‘unjustified’. If people here believe something, there are usually very good reasons for it. Going against that without at least attempting a justification is not recommended. Here are hundreds of people who have spent years trying to understand how to, in general, be correct about things, and they have managed to reach agreement on some issues. You should be shaken by that, unless you know precisely where they’ve all gone wrong, and in that case you should say so. If you’re right, they’ll all change their minds.
You’ve indicated you have false beliefs. That is a point against you. Also if you think the world is flat, the moon is made of green cheese, or 2+2=3, and don’t manage to fix that when someone tells you you’re wrong, rationalists will have a lower opinion of you. If you manage to convince them that 2+2=3, then you win back more points than you’ve lost, but it’s probably not worth the try.
Because they don’t!? I was talking about how the FDA is right; the thing that “wouldn’t work at all” is an unregulated drug industry. If you don’t like my opinion, fine, but lots of people would agree with me including many of those “political philosophers” you speak so highly of.
In my experience, people rarely change their minds once they’re sure of something. That’s not to say it doesn’t happen, otherwise why would I try. The point of argument is to try to get both people on the same ground; then they can both choose for themselves which is right, even if they don’t publicly admit “defeat”.
What if it’s not a false belief? It’s a lot different from “2+2=3” or “the world is flat”. Why? Because you can prove those things correct or incorrect.
Clicky
The extremely low prior probability and the total lack of evidence allow us, as Bayesians, to dismiss it as false. Taboo the word “proof”, because it’s not useful to us in this context.
Speaking as someone who thinks that the general outline of your point in that thread is the correct conclusion, the problem is you gave zero evidence or logic for why you would be correct. Suppose someone says “Hey we do things like X right now, but what if we did Y instead?” You can’t just respond “Y won’t work.” If you say “Y won’t work because of problems A, B, C” or “X works better than Y because of problems D, E, F” then you’ve got a discussion going. But otherwise, all you have is someone shouting “is not”/”is too.”
If we’re talking about the religion matter again, which it seems we are, weren’t you already linked to the Mysterious Answers sequence? And I’m pretty sure you were explicitly given this post. Maybe instead of just waiting out the 8 minutes between posts, read some of the things people have asked you to read? Or maybe spend a few hours just reading the sequences?
Edit: It is possible that you are running into problems with inferential distance.
That matches my experience everywhere except LW.
Again, I did not say I disagreed with you, or that people downvoted you because they disagreed with you. Rather, you’re making a strong political claim without stating any justification, and not actually contributing anything in the process.
There is strong evidence that the world is not flat. There is also strong evidence that the Christian God doesn’t exist, and in fact to an indifferent agent the (very algorithmically complex) hypothesis that the Christian God exists shouldn’t even be elevated to the level of attention.
False—division by zero. You may want to see How to Convince Me 2+2=3.
I’m guessing that confusing “too” and “to”, and “its” and “it’s”, contributed.
For the same reason you were incorrect in your reply to AlephNeil: performing experiments can increase utility if which course of action is optimal depends on which hypothesis is most likely.
If your utility function’s goal is to get the most accurate hypothesis (not act on it), sure. Otherwise, why waste its time testing something that it already believes is true? If your goal is to get the highest “utility” possible, then wasting time or resources, no matter how small, is inefficient. This means that you’re moving the blame off the Bayesian end and onto the “utility function”, but it’s still a problem.
But you don’t believe it is true; there’s some probability associated with it. Consider, for example, the following situation. Your friend rolls a standard pair of 6-sided dice without you seeing them. If you guess the correct total you get $1000. Now, it is clear that your best guess is 7, since that is the most common outcome. So you guess 7 and 1/6th of the time you get it right.
Now, suppose you have the slightly different game where before you make your guess, you may pay your friend $1 and the friend will tell you the lowest number that appeared. You seem to think that for some reason a Bayesian wouldn’t do this because they already know that 7 is most likely. But of course they would, because paying the $1 increases their expected pay-off.
In general, increasing the accuracy of your map of the universe is likely to increase your utility. Sometimes it isn’t, and so we don’t bother. Neither a Bayesian rationalist nor a traditional rationalist is going to try to, say, count all the bricks on the facade of their apartment building, even though it would increase the accuracy of their model, because this isn’t an interesting piece of the model that is at all likely to tell them anything useful compared to other things they could spend their limited resources on. If one were an immortal and really running low on things to do, maybe counting them would be a high priority.
All right, consider a situation where there is a very, very small probability that something will work, but it gives infinite utility (or at least extraordinarily large utility). The risk for doing it is also really high, but because it is finite, the Bayesian utility function will evaluate it as acceptable because of the infinite reward involved. On paper, this works out. If you do it enough times, you succeed, and after you subtract the total cost from all those other times, you still have infinity. But in practice most people consider this a very bad course of action. The risk can be very high, perhaps your life, so even the traditional rationalist would avoid doing this. Do you see where the problem is? It’s the fact that you only get a finite number of tries in reality, but the Bayesian utility function calculates it as though you did it an infinite number of times and gives you the net utility.
Yes, you aren’t the first person to make this observation. However, this isn’t a problem with Bayesianism so much as with utilitarianism giving counter-intuitive results when large numbers are involved. See for example Torture v. dust specks or Pascal’s Mugging. See especially Nyarlathotep’s Deal, which is very close to the situation you are talking about and shows that the problem seems to reside more in utilitarianism than Bayesianism. It may very well be that human preferences are just inconsistent. But this issue has very little to do with Bayesianism.
Counter-intuitive!? That’s a little more than just counter-intuitive. Imagine the CEV uses this function. Doctor Evil approaches it and says that an infinite number of humans will be sacrificed if it doesn’t let him rule the world. And there are a lot more realistic problems like that too. I think the problem comes from the fact that the net utility of all possible worlds and actual utility are not the same thing. I don’t know how to do it better, but you might want to think twice before you use this to make trade-offs.
It would help if you read the links people give you. The situation you’ve named is essentially that in Pascal’s Mugging.
Actually I did. That’s where I got it (after you linked it). And after reading all of that, I still can’t find a universal solution to this problem.
Ah. It seemed like you hadn’t because rather than use the example there you used a very similar case. I don’t know a universal solution either. But it should be clear that the problem exists for non-Bayesians so the dilemma isn’t a problem with Bayesianism.
My guess at what’s going on here is that you’re intuitively modeling yourself as having a bounded utility function. In which case (letting N denote an upper bound on your utility), no gamble where the probability of the “good” outcome is less than −1/N times the utility of the “bad” outcome could ever be worth taking. Or, translated into plain English: there are some risks such that no reward could make them worth it—which, you’ll note, is a constraint on rewards.
I’m not sure I understand. Why put a constraint on the reward, and even if you do, why pick some arbitrary value?
That’s my question for you! I was attempting to explain the intuition that generated these remarks of yours:
Because it might be false. If your utility function requires you to collect green cheese, and so you want to make a plan to go to the moon to collect the green cheese, you should know how much you’ll have to spend getting to the moon, and what the moon is actually made of. And so it is written, “If you fail to achieve a correct answer, it is futile to protest that you acted with propriety.”
You try to maximize your expected utility. Perhaps having done your calculations, you think that action X has a 5⁄6 chance of earning you £1 and a 1⁄6 chance of killing you (perhaps someone’s promised you £1 if you play Russian Roulette).
Presumably you don’t base your decision entirely on the most likely outcome.
So in this scenario you have to decide how much your life is worth in money. You can go home and not take any chance of dying, or take a 1⁄6 chance of dying to earn X amount of money. It’s an extension of the risk/reward problem, basically, and you have to decide how much risk is worth in money before you can complete it. That’s a problem, because as far as I know, Bayesianism doesn’t cover that.
It’s not the job of ‘Bayesianism’ to tell you what your utility function is.
This [by which I mean, “the question of where the agent’s utility function comes from”] doesn’t have anything to do with the question of whether Bayesian decision-making takes account of more than just the most probable hypothesis.
Only for the stated purpose of this website—to be “less wrong”! :) Quoting from Science Isn’t Strict Enough:
As for the rest of your comment: I completely agree! That was actually the explanation that the OP, komponisto, gave to me to get Bayesianism (edit: I actually mean “the idea that probability theory can be used to override your intuitions and get to correct answers”) to “click” for me (insofar as it has “clicked”). But the way that it’s represented in the post is really helpful, I think, because it eliminates even the need to imagine that there are more doors; it addresses the specifics of that actual problem, and you can’t argue with the numbers!
Quite a bit! (A quick Google Scholar search turns up about 1500 papers on methods and applications, and there are surely more.)
The formula tells you how to change your strength of belief in a hypothesis in response to evidence (this is ‘Bayesian updating’, sometimes shortened to just ‘updating’). Because the formula is a trivial consequence of the definition of a conditional probability, it holds in any situation where you can quantify the evidence and the strength of your beliefs as probabilities. This is why many of the people on this website treat it as the foundation of reasoning from evidence; the formula is very general.
Eliezer Yudkowsky’s Intuitive Explanation of Bayes’ Theorem page goes into this in more detail and at a slower pace. It has a few nice Java applets that you can use to play with some of the ideas with specific examples, too.
That’s awesome. I shall use it in the future. Wish I could multi upvote.
The way I like to think of the Monty Hall problem is like this… if you had the choice of picking either one of the three doors or two of the three doors (if the car is behind either, you win it), you would obviously pick two of the doors to give yourself a 2⁄3 chance of winning. Similarly, if you had picked your original door and then Monty asked if you’d trade your one door for the other two doors (all sight unseen), it would again be obvious that you should make the trade. Now… when you make that trade, you know that at least one of the doors you’re getting in trade has a goat behind it (there’s only one car, you have two doors, so you have to have at least one goat). So, given that knowledge and the certainty that trading one door for two is the right move (statistically), would seeing the goat behind one of the doors you’re trading for before you make the trade change the wisdom of the trade? You KNOW that you’re getting at least one goat in either case. Most people who I’ve explained it to in this way seem to see that making the trade still makes sense (and is equivalent to making the trade in the original scenario).
I think the struggle is that people tend to dismiss the existence of the 3rd door once they see what’s behind it. It sort of drops out of the picture as a resolved thing, and then the mind erroneously reformulates the situation with just the two remaining doors. The scary thing is that people are generally quite easily manipulated with these sorts of puzzles, and there are plenty of circumstances (DNA evidence given during jury trials comes to mind) where the probabilities being presented are wildly misleading as a result of erroneously eliminating segments of the problem space because they are “known”.
There’s a significant population of people—disproportionately represented here—who consider Bayesian reasoning to be theoretically superior to the ad hoc methods habitually used. An introductory essay on the subject that many people here read and agreed with is A Technical Explanation of Technical Explanation.
One more application of Bayes I should have mentioned: Aumann’s Agreement Theorem.