I’m trying to incorporate this with conservation of expected evidence: http://lesswrong.com/lw/ii/conservation_of_expected_evidence/
For example:
“On average, you must expect to be exactly as confident as when you started out. Equivalently, the mere expectation of encountering evidence—before you’ve actually seen it—should not shift your prior beliefs.”
-Eliezer_Yudkowsky
AND
“Your actual probability starts out at 0.5, rises steadily as the clever arguer talks (starting with his very first point, because that excludes the possibility he has 0 points)”
-Eliezer_Yudkowsky
These appear to be contradictory, given that each point = a piece of evidence (shininess of the box, presence of a blue stamp, etc.).
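For reference, here is conservation of expected evidence written out in the same notation the posts already use (this is just the standard identity, not something from the quotes):

P(H) = P(H|E) * P(E) + P(H|~E) * P(~E)

or equivalently

P(E) * [P(H|E) − P(H)] = P(~E) * [P(H) − P(H|~E)]

So any expected rise in P(H) from seeing the evidence has to be balanced by an expected fall from not seeing it. The clever arguer seems to promise a rise on every single point, with nothing left over to balance it.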
The cherry-picking problem appears to be similar to the witch-trial problem: in the latter, any piece of evidence is interpreted to support the conclusion, while in the former, evidence is only presented if it supports the conclusion.
You can’t expect your probability to increase, on average, before seeing/hearing the evidence.
I think the expectation could only balance out if you have a large background of knowledge, with a high probability that you are already aware of any given piece of evidence. But hearing a piece of evidence you already know simply shouldn’t alter your probability, rather than lower it.
I’m having a hard time coming up with a way to properly balance the equation.
The only thing I can think of is to count the entire argument as one piece of evidence, and use a strategy like the one suggested by g for updating your priors based on the entire sum?
But you don’t necessarily listen to the entire argument.
Knowing about hypothetical cutoff points, below which they won’t spend the time to present and explain evidence, means that with enough information you could still construct probabilities.
If time is limited, can you update with each single piece of evidence based on its strength relative to what you expected?
What if you are unfamiliar with the properties of boxes and how they are related to the likelihood of the presence of a diamond? Any guesstimates seem like they’d be well beyond my abilities, at least.
Unless I already know a lot, I have a hard time justifying updating my priors at all based on the clever arguer’s (CA’s) arguments.
If I do know a lot, I still can’t think of a way to justifiably not expect the probability to increase, which is a problem.
Help, ideas?
PS. Thankfully not everyone is a clever arguer. Ideally, scientists/teachers teaching you about evolution (for example) will not be selective in giving evidence. The evidence will simply be lopsided because of nature being lopsided in how it produces evidence (entangled with truth).
I don’t think one has to actually listen to a creationist, assuming it is known that the scientists/source material the teacher is drawing from are using good practice.
Also, this is my first post here, so if I am ignorant please let me know and direct me to how I can improve!
Someone claiming that they have evidence for a thing is already evidence for that thing, if you trust them at all, so you can update on that, and then revise that update based on how good the evidence turns out to be once you actually get it.
For example, say gwern posts to Discussion that he has a new article on his website about some drug, and he says “tl;dr: It’s pretty awesome” but doesn’t give any details, and when you follow the link to the site you get an error and can’t see the page. gwern’s put together a few articles now about drugs, and they’re usually well-researched and impressive, so it’s pretty safe to assume that if he says a drug is awesome, it is, even if that’s the only evidence you have. This is a belief about both the drug (it is particularly effective at what it’s supposed to do) and what you’ll see when you’re able to access the page about it (there will be many citations of research indicating that the drug is particularly effective).
Now, say a couple days later you get the page to load, and what it actually says is “ha ha, April Fools!”. This is new information, and as such it changes your beliefs—in particular, your belief that the drug is any good goes down substantially, and any future cases of gwern posting about an ‘awesome’ drug don’t make you believe as strongly that the drug is good—the chance that it’s good if there is an actual page about it stays about the same, but now you also have to factor in the chance that it’s another prank—or in other words that the evidence you’ll be given will be much worse than is being claimed.
It’s harder to work out an example of evidence turning out to be much stronger than is claimed, but it works on the same principle—knowing that there’s evidence at all means you can update about as much as you would for an average piece of evidence from that source, and then when you learn that the evidence is much better, you update again based on how much better it is.
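A minimal numeric sketch of that two-stage update, with made-up likelihood ratios (the function and numbers below are my own illustration, not anything from the thread):

```python
def posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

prior = 0.5  # P(the drug is good) before hearing anything

# Stage 1: a source with a good track record claims "it's awesome".
# Suppose (hypothetically) such a claim is 4x as likely if the drug really is good.
after_claim = posterior(prior, 4.0)              # 0.8

# Stage 2a: the page turns out to show unusually strong trial data, stronger
# than an average claim from this source would predict -> update up again.
after_strong_page = posterior(after_claim, 3.0)  # ~0.92

# Stage 2b: the page says "ha ha, April Fools!" -> the claim carried no real
# information after all, so the stage-1 update gets revised back out.
after_prank = posterior(after_claim, 1 / 4.0)    # 0.5

print(after_claim, after_strong_page, after_prank)
```

The point of the sketch is only that the first update is on the claim of evidence, and the second update is on how the evidence compares to what the claim led you to expect.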
It’s harder to work out an example of evidence turning out to be much stronger than is claimed
Not particularly difficult: just posit a person who, prior experience has taught you, is particularly unreliable about assessing evidence. If they post a link arguing a position you already know they’re in favor of, you should assign a relatively low weight of evidence to the knowledge that they’ve linked to a resource arguing the position; but if you check it out and find that it’s actually well researched and reasoned, then you update upwards.
Thanks for the response.
However, I think you misunderstood what I was attempting to say. I see I didn’t use the term “filtered evidence”, and am wondering if my comment showed up somewhere other than the article “What Evidence Filtered Evidence?”:
http://lesswrong.com/lw/jt/what_evidence_filtered_evidence/
That would explain how I got a response so quickly when commenting on a 5-year-old article! If so, my mistake, as my comment was then completely misleading!
When the information does not come from a filtered source, I agree with you.
If I find out that there is evidence that will be in the up (or down) direction of a belief, this will modify my priors based on the degree of entanglement between the source and the matter of the belief.
After seeing the evidence, the probability assessment will on average remain the same; if the evidence was weaker/stronger than expected, it will end up lower/higher (or higher/lower, if the evidence pointed downward), but it will of course not pass back over the position I held before hearing the news, unless it turns out to be evidence in the opposite direction.
What my question was about was filtered evidence.
Filtered evidence is a special case, in which the entanglement between the source and the matter of the belief is 0.
Using Eliezer_Yudkowsky's example from "the bottom line":
http://lesswrong.com/lw/js/the_bottom_line/
The only entanglement of the claim about which sort of evidence I will be presented with (“box B contains the diamond!”) is with whether the owner of box A or box B bid higher (the owner of box B, apparently).
It is not entangled with the actual contents of those boxes (unless there is some relation between willingness to bid and things actually entangled with the presence/absence of a diamond).
Therefore, being told of this will not alter my belief about whether or not box B is the one that contains the diamond.
This is the lead-in to the questions posed in my previous/first post.
Knowing that the source is filtered, every single piece of evidence he gives you will support the conclusion that the diamond is in box B. Yet you simply cannot expect every single piece of evidence to increase, on average, your belief that it is in box B.
While the arguments are actually entangled, their selective presentation means P(E)=1 and P(~E)=0.
You can’t balance the equations other than by not modifying your beliefs at all with each piece of evidence.
P(diamond in box B) = P(diamond in box B | you are shown evidence), meaning the evidence doesn’t matter.
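Spelling that out with Bayes’ theorem, under the assumption above that the arguer presents supporting evidence no matter which box holds the diamond:

P(diamond in B | E) = P(E | diamond in B) * P(diamond in B) / P(E) = 1 * P(diamond in B) / 1 = P(diamond in B)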
The direction of the evidence that passes through the filter (the clever arguer) is entangled with which box-owner bid more money. Not with which box actually contains the diamond.
Thus, it sounds like you simply should not modify your beliefs when faced with a clever arguer who filters evidence: there is no entanglement between the evidence’s direction and the truth, and conservation of expected evidence is satisfied.
My problem: not taking into account evidence entangled with reality doesn’t sit well with me. It sounds as though it should ultimately be taken into account, but I can’t immediately think of an effective process by which to do it.
Using drugs instead of boxes, if that is an example you prefer: imagine a clever arguer hired by Merck to argue about what a great drug Rofecoxib is. The words “cardiovascular”, “stroke”, and “heart attack” won’t ever come up.
With the help of selectively drawing from trials, a CA can paint a truly wonderful picture of the drug, one with limited bearing on reality.
Before seeing his evidence he tells you “Rofecoxib is wonderful!” This shouldn’t modify your belief, as it only tells you he is on Merck’s payroll.
Now how do you appropriately modify your belief on the drug’s quality and merits with the introduction of each piece of evidence this clever arguer presents to you?
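One way to make that concrete, sketched with made-up numbers (the feature counts and probabilities below are my own toy model, not anything from the posts): instead of updating on each favorable point as if it were unfiltered, update on how many favorable points the clever arguer is able to produce, since a box that really contains the diamond tends to have more favorable-looking features for him to select from.

```python
from math import comb

# Toy model: a box has N features; each looks favorable with higher probability
# if the diamond is actually inside. The clever arguer shows only the favorable
# ones, so what you actually learn is k = how many favorable points he produces.
N = 10
p_if_diamond = 0.7   # P(a feature looks favorable | diamond in box B)
p_if_empty = 0.4     # P(a feature looks favorable | diamond in box A)

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def posterior_diamond_in_B(k, prior=0.5):
    like_b = binom_pmf(k, N, p_if_diamond)  # P(k favorable points | diamond in B)
    like_a = binom_pmf(k, N, p_if_empty)    # P(k favorable points | diamond in A)
    return like_b * prior / (like_b * prior + like_a * (1 - prior))

for k in range(N + 1):
    print(f"CA produces {k:2d} favorable points -> P(diamond in B) = "
          f"{posterior_diamond_in_B(k):.3f}")
```

In this toy model, hearing him produce yet another point does push the posterior up (it rules out the lower counts), while hearing him run out of points pushes it down, and the expected net change before he opens his mouth is zero, which is how the two quotes at the top stop looking like a contradiction.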
Actually, it’s my bad—I found your comment via the new-comments list, and didn’t look very closely at its context.
As to your actual question: Being told that someone has evidence of something is, if they’re trustworthy, not just evidence of the thing, but also evidence of what other evidence exists. For example, in my scenario with gwern’s prank, before I’ve seen gwern’s web page, I expect that if I look the mentioned drug up in other places, I’ll also see evidence that it’s awesome. If I actually go look the drug up and find out that it’s no better than placebo in any situation, that’s also surprising new information that changes my beliefs—the same change that seeing gwern’s “April Fools” message would cause, in fact, so when I do see that message, it doesn’t surprise me or change my opinion of the drug.
In your scenario, I trust Merck’s spokesperson much less than I trust gwern, so I don’t end up with nearly so strong a belief that third parties will agree that the drug is a good one. Looking it up and finding out that it has dangerous side effects wouldn’t be surprising, so I should take the chance of that into account to begin with, even if the Merck spokesperson doesn’t mention it. This habit of taking possible information from third parties (or information that could be discovered in other ways besides talking to third parties, but that the person you’re speaking to wouldn’t tell you even if they’d discovered it) into account when talking to untrustworthy people is the intended lesson of the original post.
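A small sketch of that last point, again with made-up numbers (my own illustration): how much “no side effects were mentioned” should move you depends almost entirely on how likely the speaker would be to mention them if they existed.

```python
def posterior(prior, p_silent_if_safe, p_silent_if_unsafe):
    """P(drug is safe | speaker never mentioned side effects), by Bayes' rule."""
    num = p_silent_if_safe * prior
    return num / (num + p_silent_if_unsafe * (1 - prior))

prior = 0.5  # P(no serious side effects) before anyone speaks

# An unfiltered, trustworthy source would almost surely mention real side
# effects, so their silence is strong evidence of safety.
print(posterior(prior, 0.95, 0.10))   # ~0.90

# A clever arguer on the manufacturer's payroll stays silent either way,
# so the same silence tells you almost nothing.
print(posterior(prior, 0.99, 0.95))   # ~0.51
```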