To make decisions, you combine probability estimates of outcomes with a utility function, and maximize expected utility. A possibility with very low probability may nevertheless change a decision, if that possibility has a large enough effect on utility.
See the reply I made to AlephNeil. Also, this still doesn’t change my scenario. If theres a way to test a hypothesis, I see no reason the bayesian method ever would, even if it seems like common sense to look before you leap.
Anyone know why I can only post comments every 8 minutes? Is the bandwidth really that bad?
Bayesianism is only a predictor; it gets you from prior probabilities plus evidence to posterior probabilities. You can use it to evaluate the likelihood of statements about the outcomes of actions, but it will only ever give you probabilities, not normative statements about what you should or shouldn’t do, or what you should or shouldn’t test. To answer those questions, you need to add a decision theory, which lets you reason from a utility function plus a predictor to a strategy, and a utility function, which takes a description of an outcome and assigns a score indicating how much you like it.
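A minimal sketch of the separation this comment describes, with every probability, outcome, and utility invented for illustration: the predictor maps an action to a distribution over outcomes, the utility function scores outcomes, and the decision theory just maximizes expected utility. It also shows the earlier point that a very low-probability outcome can flip the decision.

```python
# Sketch of: decision theory = predictor + utility function.
# All numbers below are made up for illustration.

def expected_utility(outcome_probs, utility):
    """outcome_probs: dict mapping outcome -> probability."""
    return sum(p * utility(o) for o, p in outcome_probs.items())

def best_action(actions, predictor, utility):
    """The strategy: pick the action maximizing expected utility."""
    return max(actions, key=lambda a: expected_utility(predictor(a), utility))

def predictor(action):
    if action == "leap":
        return {"arrive": 0.999, "hit_by_car": 0.001}
    return {"arrive_late": 1.0}  # looking first costs a little time

def utility(outcome):
    return {"arrive": 100, "arrive_late": 99, "hit_by_car": -1_000_000}[outcome]

# The 0.001 possibility dominates: EU(leap) = 0.999*100 - 0.001*1e6 = -900.1,
# so "look before you leap" wins despite its small cost.
print(best_action(["leap", "look_then_leap"], predictor, utility))
```

Note that neither the probabilities nor the utilities come from Bayes' theorem itself; the Bayesian machinery only supplies the numbers inside `predictor`.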
The rate-limit on posting isn’t because of bandwidth, it’s to defend against spammers who might otherwise try to use scripts to post on every thread at once. I believe it goes away with karma, but I don’t know what the threshold is.
Anyone know why I can only post comments every 8 minutes? Is the bandwidth really that bad?
You face limits on your rate of posting if you’re at or below 0 karma, which seems to be the case for you. How you got modded down so much, I’m not so sure of.
How you got modded down so much, I’m not so sure of.
Bold, unjustified political claims. Bold, unjustified claims that go against consensus. Bad spelling/grammar. Also a Christian, but those comments don’t seem to be negative karma.
Yeah, I hadn’t been following Houshalter very closely, and the few that I did see weren’t about politics, and seemed at least somewhat reasonable. (Maybe I should have checked the posting history, but I was just saying I’m not sure, not that the opposite would be preferable.)
What bold unjustified political claims? You do realise that every other person on this site I’ve met so far has some kind of extreme political view. I thought I was kind of reasonable.
Bold, unjustified claims that go against consensus.
In other words, I disagreed with you. I always look for the reasons to doubt something or believe in something else before I just “go along with it”.
Bad spelling/grammar.
What’s wrong with my spelling/grammar? I double check everything before I post it!
Bold, unjustified claims that go against consensus.
In other words, I disagreed with you. I always look for the reasons to doubt something or believe in something else before I just “go along with it”.
No. In other words, you’ve made claims that assume statements against consensus, often without even realizing it or giving any justification when you do so. As I already explained to you, the general approach at LW has been hashed out quite a bit. Some people (such as myself) disagree with a fair bit. For example, I’m much closer to being a traditional rationalist than a Bayesian rationalist and I also assign a very low probability to a Singularity-type event. But I’m aware enough to know when I’m operating under non-consensus views so I’m careful to be explicit about what those views are and if necessary, note why I have them. I’m not the only such example. Alicorn for example (who also replied to this post) has views on morality that are a distinct minority in LW, but Alicorn is careful whenever these come up to reason carefully and make her premises explicit. Thus, the comments are far more likely to be voted up than down.
Your persecuting me because of my religion!?
Well, for the people complaining about grammar: “Your” → “You’re”
But no, you’ve only mentioned your religious views twice I think, and once in passing. The downvotes there were, I’m pretty sure, because your personal religious viewpoint was utterly beside the point being made about the general LW consensus.
What bold unjustified political claims? You do realise that every other person on this site I’ve met so far has some kind of extreme political view. I thought I was kind of reasonable.
Emphasis on ‘unjustified’. Example. This sounds awfully flippant and sure of yourself—“This system wouldn’t work at all”. Why do you suppose so many people, including professional political scientists / political philosophers / philosophers of law think that it would work? Do you have an amazing insight that they’re all missing? Sure, there are people with many different positions on this issue, but unless you’re actually going to join the debate and give solid reasons, you weren’t really contributing anything with this comment.
Also, comments on political issues are discouraged, as politics is the mind-killer. Unless you’re really sure your political comment is appropriate, hold off on posting it. And if you’re really sure your political comment is too important not to post, you should check to make sure you’re being rational, as that’s a good sign you’re not.
In other words, I disagreed with you. I always look for the reasons to doubt something or believe in something else before I just “go along with it”.
Again, emphasis on ‘unjustified’. If people here believe something, there are usually very good reasons for it. Going against that without at least attempting a justification is not recommended. Here are hundreds of people who have spent years trying to understand how to, in general, be correct about things, and they have managed to reach agreement on some issues. You should be shaken by that, unless you know precisely where they’ve all gone wrong, and in that case you should say so. If you’re right, they’ll all change their minds.
Also a Christian
Your[sic] persecuting me because of my religion!?
You’ve indicated you have false beliefs. That is a point against you. Also if you think the world is flat, the moon is made of green cheese, or 2+2=3, and don’t manage to fix that when someone tells you you’re wrong, rationalists will have a lower opinion of you. If you manage to convince them that 2+2=3, then you win back more points than you’ve lost, but it’s probably not worth the try.
Emphasis on ‘unjustified’. Example. This sounds awfully flippant and sure of yourself—“This system wouldn’t work at all”. Why do you suppose so many people, including professional political scientists / political philosophers / philosophers of law think that it would work?
Because they don’t!? I was talking about how the FDA is right, the “wouldn’t work at all” is an unregulated drug industry. If you don’t like my opinion, fine, but lots of people would agree with me including many of those “political philosophers” you speak so highly of.
If you’re right, they’ll all change their minds.
In my expirience, people rarely change they’re minds after their sure of something. Thats not to say it doesn’t happen, otherwise why would I try. The point of argument is to try to get both people on the same ground, then they can both choose for themselves which is right, even if they don’t publicly admit “defeat”.
You’ve indicated you have false beliefs.
What if it’s not a false belief? It’s alot different from “2+2=3” or “the world is flat”. Why? Because you can prove those things correct or incorrect.
If you manage to convince them that 2+2=3, then you win back more points than you’ve lost, but it’s probably not worth the try.
What if it’s not a false belief? It’s alot different from “2+2=3” or “the world is flat”. Why? Because you can prove those things correct or incorrect.
The extremely low prior probability and the total lack of evidence allow us, as Bayesians, to dismiss it as false. Taboo the word “proof”, because it’s not useful to us in this context.
Because they don’t!? I was talking about how the FDA is right, the “wouldn’t work at all” is an unregulated drug industry. If you don’t like my opinion, fine, but lots of people would agree with me including many of those “political philosophers” you speak so highly of.
Speaking as someone who thinks that the general outline of your point in that thread is the correct conclusion, the problem is you gave zero evidence or logic for why you would be correct. Suppose someone says “Hey, we do things like X right now, but what if we did Y instead?” You can’t just respond “Y won’t work.” If you say “Y won’t work because of problems A, B, C” or “X works better than Y because of problems D, E, F” then you’ve got a discussion going. But otherwise, all you have is someone shouting “is not”/“is too.”
What if it’s not a false belief? It’s alot different from “2+2=3” or “the world is flat”. Why? Because you can prove those things correct or incorrect.
If we’re talking about the religion matter again, which it seems we are, weren’t you already linked to the Mysterious Answers sequence? And I’m pretty sure you were explicitly given this post. Maybe instead of waiting 8 minutes between posts, use that time to read some of the things people have asked you to read? Or maybe spend a few hours just reading the sequences?
Edit: It is possible that you are running into problems with inferential distance.
In my expirience, people rarely change they’re minds after their sure of something.
That matches my experience everywhere except LW.
If you don’t like my opinion, fine, but lots of people would agree with me including many of those “political philosophers” you speak so highly of.
Again, I did not say I disagreed with you, or that people downvoted you because they disagreed with you. Rather, you’re making a strong political claim without stating any justification, and not actually contributing anything in the process.
What if it’s not a false belief? It’s alot different from “2+2=3” or “the world is flat”. Why? Because you can prove those things correct or incorrect.
There is strong evidence that the world is not flat. There is also strong evidence that the Christian God doesn’t exist, and in fact to an indifferent agent the (very algorithmically complex) hypothesis that the Christian God exists shouldn’t even be elevated to the level of attention.
For the same reason you were incorrect in your reply to AlephNeil: performing experiments can increase expected utility when which course of action is optimal depends on which hypothesis is most likely.
If your utility function’s goal is to get the most accurate hypothesis (not act on it) sure. Otherwise, why waste its time testing something that it already believes is true? If your goal is to get the highest “utility” as possible, then wasting time or resources, no matter how small, is inefficient. This means that your moving the blame off the bayesian end and to the “utility function”, but its still a problem.
If your utility function’s goal is to get the most accurate hypothesis (not act on it) sure. Otherwise, why waste its time testing something that it already believes is true? If your goal is to get the highest “utility” as possible, then wasting time or resources, no matter how small, is inefficient. This means that your moving the blame off the bayesian end and to the “utility function”, but its still a problem.
But you don’t believe it is true; there’s some probability associated with it. Consider, for example, the following situation. Your friend rolls a standard pair of 6-sided dice without you seeing them. If you guess the correct total you get $1000. Now, it is clear that your best guess is 7, since that is the most common outcome. So you guess 7, and 1/6th of the time you get it right.
Now, suppose you have the slightly different game where before you make your guess, you may pay your friend $1 and the friend will tell you the lowest number that appeared. You seem to think that for some reason a Bayesian wouldn’t do this because they already know that 7 is most likely. But of course they would, because paying the $1 increases their expected pay-off.
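The expected values in this exchange can be checked directly by enumerating all 36 rolls; here is a quick sketch (the $1000 prize and the $1 hint fee are taken from the example above):

```python
from collections import Counter
from fractions import Fraction

rolls = [(a, b) for a in range(1, 7) for b in range(1, 7)]

# Without the hint: guess 7, the most common total.
p_win_blind = Fraction(sum(1 for a, b in rolls if a + b == 7), 36)

# With the hint: for each possible lowest die m, guess whichever total is
# most common among the rolls whose minimum is m.
wins = 0
for m in range(1, 7):
    totals = Counter(a + b for a, b in rolls if min(a, b) == m)
    wins += totals.most_common(1)[0][1]
p_win_hint = Fraction(wins, 36)

ev_blind = 1000 * p_win_blind       # 1000 * 1/6   ~ $166.67
ev_hint = 1000 * p_win_hint - 1     # 1000 * 11/36 - 1 ~ $304.56
print(p_win_blind, p_win_hint)
```

The hint nearly doubles the chance of winning (11/36 versus 6/36), so paying the $1 is clearly the expected-utility-maximizing move, exactly as the comment says.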
In general, increasing the accuracy of your map of the universe is likely to increase your utility. Sometimes it isn’t, and so we don’t bother. Neither a Bayesian rationalist nor a traditional rationalist is going to try to, say, count all the bricks on the facade of their apartment building, even though it would increase the accuracy of their model, because this isn’t a piece of the model that is at all likely to tell them anything useful compared to other things they could attend to. If one were an immortal and really running low on things to do, maybe counting them would be a high priority.
Allright, consider a situation where there is a very very small probability that something will work, but it gives infinite utility (or at least extrordinarily large.) The risk for doing it is also really high, but because it is finite, the bayesian utility function will evaluate it as acceptable because of the infinite reward involved. On paper, this works out. If you do it enough times, you succeed and after you subtract the total cost from all those other times, you still have infinity. But in practice most people consider this a very bad course of action. The risk can be very high, perhaps your life, so even the traditional rationalist would avoid doing this. Do you see where the problem is? It’s the fact that you only get a finite number of tries in reality, but the bayesian utility function calculates it as though you did it an infinite number of times and gives you the net utility.
Yes, you aren’t the first person to make this observation. However, this isn’t a problem with Bayesianism so much as with utilitarianism giving counter-intuitive results when large numbers are involved. See for example Torture v. dust specks or Pascal’s Mugging. See especially Nyarlathotep’s Deal, which is very close to the situation you are talking about and shows that the problem seems to reside more in utilitarianism than in Bayesianism. It may very well be that human preferences are just inconsistent. But this issue has very little to do with Bayesianism.
This isn’t a problem with Bayesianism so much as with utilitarianism giving counter-intuitive results when large numbers are involved.
Counter-intuitive!? Thats a little more than just counter-intuitive. Immagine the CEV uses this function. Doctor Evil approaches it and says that an infinite number of humans will be sacrificed if it doesn’t let him rule the world. And there are a lot more realistic problems like that to. I think the problem comes from the fact that net utility of all possible worlds and actual utility are not the same thing. I don’t know how to do it better, but you might want to think twice before you use this to make trade offs.
Ah. It seemed like you hadn’t because rather than use the example there you used a very similar case. I don’t know a universal solution either. But it should be clear that the problem exists for non-Bayesians so the dilemma isn’t a problem with Bayesianism.
My guess at what’s going on here is that you’re intuitively modeling yourself as having a bounded utility function. In which case (letting N denote an upper bound on your utility), no gamble where the probability of the “good” outcome is less than −1/N times the utility of the “bad” outcome could ever be worth taking. Or, translated into plain English: there are some risks such that no reward could make them worth it—which, you’ll note, is a constraint on rewards.
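The bound being described can be illustrated numerically; all the figures below (the cap N, the badness of the bad outcome, the two probabilities) are made up for the sketch.

```python
# With utility capped at N, even the best possible reward (N itself)
# cannot rescue a gamble once the probability of the good outcome is
# too small relative to the badness of the bad outcome.

def gamble_worth_taking(p_good, u_bad, n_cap):
    # Give the gamble its best case: assume the good outcome pays the cap N.
    return p_good * n_cap + (1 - p_good) * u_bad > 0

N = 1_000      # upper bound on utility
u_bad = -10    # utility of the bad outcome
# Roughly, the gamble can only be worth taking when p_good > -u_bad / N = 0.01.
print(gamble_worth_taking(0.02, u_bad, N))    # comfortably above the cutoff
print(gamble_worth_taking(0.005, u_bad, N))   # below it: no reward can help
```

This is the sense in which a bounded utility function imposes a constraint on rewards: below the cutoff probability, no payoff, however described, clears the bar.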
That’s my question for you! I was attempting to explain the intuition that generated these remarks of yours:
The risk for doing it is also really high, but… the bayesian utility function will evaluate it as acceptable because of the [extraordinarily large] reward involved. On paper, this works out...But in practice most people consider this a very bad course of action
Otherwise, why waste its time testing something that it already believes is true?
Because it might be false. If your utility function requires you to collect green cheese, and so you want to make a plan to go to the moon to collect the green cheese, you should know how much you’ll have to spend getting to the moon, and what the moon is actually made of. And so it is written, “If you fail to achieve a correct answer, it is futile to protest that you acted with propriety.”
Bold, unjustified political claims. Bold, unjustified claims that go against consensus. Bad spelling/grammar. Also a Christian, but those comments don’t seem to be negative karma.
I can attest that being Christian itself does not seem to make a negative difference. :D
Upvoted. That took me a minute to get.
You’re persecuting me because of my religion!?
Whatever. I’ll post again in 8 minutes I guess.
In this comment:
Whats → What’s
Your → You’re
Also, arguably a missing comma before “I guess”.
What if it’s not a false belief? It’s alot different from “2+2=3” or “the world is flat”. Why? Because you can prove those things correct or incorrect.
Clicky
False—division by zero. You may want to see How to Convince Me 2+2=3.
I’m guessing that confusing “too” and “to”, and “its” and “it’s”, contributed.
Counter-intuitive!? Thats a little more than just counter-intuitive. Immagine the CEV uses this function. Doctor Evil approaches it and says that an infinite number of humans will be sacrificed if it doesn’t let him rule the world. And there are a lot more realistic problems like that to. I think the problem comes from the fact that net utility of all possible worlds and actual utility are not the same thing. I don’t know how to do it better, but you might want to think twice before you use this to make trade offs.
It would help if you read the links people give you. The situation you’ve named is essentially that in Pascal’s Mugging.
Actually I did. Thats where I got it (after you linked it). And after reading all of that, I still can’t find a universal solution to this problem.
My guess at what’s going on here is that you’re intuitively modeling yourself as having a bounded utility function. In which case (letting N denote an upper bound on your utility), no gamble where the probability of the “good” outcome is less than −1/N times the utility of the “bad” outcome could ever be worth taking. Or, translated into plain English: there are some risks such that no reward could make them worth it—which, you’ll note, is a constraint on rewards.
I’m not sure I understand. Why put a constraint on the reward, and even if you do, why pick some arbitrary value?