Yes, this is all evidence according to the Bayesian definition. Calling E evidence (with respect to prior knowledge X) for a proposition H just means that p(H | E & X) > p(H | X). That is why quantifying evidence is so important. Just how much evidence is it? If all the evidence you offer raises the probability of H by only a few percent from a very low prior, then it should have practically no effect on how we treat Knox.
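To make the “just how much” question concrete, here is a minimal sketch of Bayes’ rule in odds form, with hypothetical numbers of my own choosing (not estimates drawn from the actual case):

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio,
# where the likelihood ratio is p(E | H & X) / p(E | ~H & X).

def posterior(prior, likelihood_ratio):
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Weak evidence (likelihood ratio of 2) on a very low prior barely moves it:
print(posterior(0.001, 2))     # ~0.002 -- still negligible
# Strong evidence (likelihood ratio of 1000) on the same prior dominates it:
print(posterior(0.001, 1000))  # ~0.50
```

Saying “E is evidence for H” only tells you the likelihood ratio exceeds 1; how we should treat Knox depends on how far above 1 it is and where the prior started.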
I basically agree, but sometimes it’s helpful to have a story or narrative or illustration before jumping in to look at the important evidence. That’s just how the human mind works, or at least most people’s minds. I realize this can be dangerous, for example it can lead to a “conjunction fallacy,” but I was careful to label my narrative as speculation.
Just today I was browsing this web site and I came across an article called “Existential Risk” which was complete with (1) a picture of the Earth; (2) a likely apocryphal story about a man who singlehandedly prevented nuclear war; and (3) a picture of a Stanford torus. Is this cheap emotional manipulation? Perhaps, but again, I think this kind of story-telling can be useful to get the mind ready to focus on the meat of the argument.
One can ask what the likelihood is that we are reaching a critical juncture where the decisions and diligence of just a few humans in the artificial intelligence community will have a massive impact on the future of humanity. Strictly speaking, the fact that some Russian dude did (or didn’t) singlehandedly prevent a nuclear war shouldn’t have much impact on our estimate of this probability. But I think it still might be worth mentioning to demonstrate the plausibility of the claim that one person can have a big impact.
In the same way, I think it’s worth mentioning the Janet Chandler case from the 70s. But again, if you object to this approach, just ignore paragraphs 2 through 7 of my blog post.
I agree that it’s worth mentioning Janet Chandler. But it would be better to treat it seriously as evidence, rather than merely as a narrative framing device. To treat it seriously as evidence, you should use it to help establish a prior probability for Knox’s guilt (like Desrtopa did).
It seems to me that “narrative framing device” is basically a poor man’s method of estimating a prior probability. Here’s what I said in my blog post:
The point is that there are levels of extraordinary. Claiming that Knox participated in her roommate’s murder is not like claiming that the president is actually an extra-terrestrial from Mars.
Of course in terms of assessing probabilities, it might be better if there were a lot of precedents, for example in a situation where a husband is suspected of killing his wife. But here there’s not a lot to go on.
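For what it’s worth, here is the kind of base-rate calculation a precedent-rich reference class would allow; the counts below are made up purely to show the mechanics, not real statistics:

```python
# Hypothetical counts for a reference class with many precedents
# (e.g. "spouse of the victim was the killer"); the prior is just the base rate.
guilty_in_class = 250   # made-up count of cases where the suspect was guilty
total_in_class = 1000   # made-up size of the reference class
prior = guilty_in_class / total_in_class
print(prior)  # 0.25 -- a starting point, to be updated with case-specific evidence
```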