It seems that Pascal’s Wager is a particularly difficult example to work with, since it involves a hypothesized entity that actively rewards one for giving a higher probability assignment to that hypothesis.
I’m not sure what a good definition of “liberalism” is, but the definition you use seems to mean something closer to bureaucratic authoritarianism, which obviously isn’t the same thing, given that most self-identified liberals want less government involvement in many family-related issues (e.g. gay marriage). It is likely that there is no concise definition of these sorts of terms, since which policy attitudes are common is to a large extent a product of history and social forces rather than coherent ideology.
I mean, here’s a prediction from this theory: we should see a lot of trivial papers published, papers that don’t really advance the field in any significant way but merely add to the count of papers published.
Well, nice of you to admit that you already knew this. But, at the same time, this seems to be a terribly weak prediction even if one didn’t know about it. One expects that as fields advance and there is less low-hanging fruit, more and more seemingly minor papers will be published. (I’m not sure there are many papers published which are trivial; minor and trivial are not the same thing.)
given that most self-identified liberals want less government involvement in many family-related issues (e.g. gay marriage).
Mm. I’m not quite sure this is true. Many liberals I know are perfectly content with the level of government involvement in (for example) marriage—we just want the nature of that involvement to not discriminate against (for example) gays.
It seems that Pascal’s Wager is a particularly difficult example to work with, since it involves a hypothesized entity that actively rewards one for giving a higher probability assignment to that hypothesis.
Almost all hypotheses have this property. If you’re really in event X, then you’d be better off believing that you’re in X.
I think what Joshua meant was that the situation rewards the belief directly rather than the actions taken as a result of the belief, as is more typical.
Yes, but there was no explanation of why it’s “particularly difficult”, and the only property listed as justifying this characterization is present almost everywhere, including in cases that are not at all difficult. I pointed out that this property doesn’t work as an explanation.
I think the phrase “entity that actively rewards one for giving a higher probability...” made the point clear enough. If my state of information implies a 1% probability that a large asteroid will strike Earth in the next fifty years, then I would be best off assigning 1% probability to that, because the asteroid’s behaviour isn’t hypothesized to depend at all on my beliefs about it. If my state of information implies a 1% probability that there is a God who will massively reward only those who believe in his existence with 100% certainty, and who will punish all others, then that’s an entity that actively rewards certain people based on their having overconfident probability assignments. The difficulty, then, is in the possibility and desirability of treating one’s own probability assignments as just another thing to make decisions about.
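To make the contrast concrete, here is a minimal Python sketch; the payoff numbers, the 0.999 belief threshold, and the use of a log score for the asteroid case are all illustrative assumptions, not anything stated above. When the event ignores my belief, a proper scoring rule is maximized by staying calibrated at 1%; when the hypothesized entity rewards the belief itself, expected payoff is maximized by the overconfident assignment.

```python
# Toy contrast: belief-independent event (asteroid) vs. belief-rewarding entity (wager).
# All numbers are illustrative assumptions.
import math

TRUE_P = 0.01  # probability implied by my state of information

def expected_log_score(reported_p, true_p=TRUE_P):
    """Expected log score when the event itself ignores my belief (asteroid case);
    maximized by reporting true_p, i.e. by staying calibrated."""
    reported_p = min(max(reported_p, 1e-9), 1 - 1e-9)
    return true_p * math.log(reported_p) + (1 - true_p) * math.log(1 - reported_p)

def expected_wager_payoff(reported_p, true_p=TRUE_P, reward=1e6, punishment=-1e3):
    """Expected payoff when the hypothesized entity rewards only (near-)certain believers
    and punishes everyone else; the payoff depends on the belief itself, so calibration
    no longer maximizes expected utility."""
    payoff_if_entity_exists = reward if reported_p >= 0.999 else punishment
    return true_p * payoff_if_entity_exists  # nothing happens if the entity doesn't exist

for p in (0.01, 0.5, 0.999):
    print(f"belief={p:<6}  asteroid score={expected_log_score(p):8.3f}  "
          f"wager payoff={expected_wager_payoff(p):10.1f}")
```

Under these assumed numbers the calibrated 1% belief wins on the asteroid score, while the near-certain belief wins on the wager payoff, which is exactly the sense in which the belief itself becomes something to make decisions about.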
I understand where the difficulty comes from; my complaint was with the justification for that difficulty given in Joshua’s comment. Maybe you’re right, and the onus of justification was on the word “actively”, even though it wasn’t explained.
Let belief A include “having at least .9 belief in A has a great outcome, independent of actions”, where the great outcome in question is worth a dominating amount of utility. If an agent somehow gets into the epistemic state of having .5 belief in A (and has no opposing beliefs about direct punishments for believing A, and updating its beliefs without evidence is an available action), it will update to have .9 belief in A. If it encounters evidence against A that wouldn’t reduce the probability low enough to counter the dominating utility of the great outcome, it would ignore it. And if it does not keep a record of the evidence it has processed, just updating incrementally, it would not notice when it has accumulated enough evidence to discard A.
Of course, this illustration of the problem depends on the agent having certain heuristics and biases.
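For concreteness, here is a hypothetical toy agent in the same spirit; the NaiveAgent name, the specific numbers, and the naive “updating is just another action” decision rule are assumptions made for this sketch, not part of the comment above. It shows both failure modes: the self-modification from .5 to .9 belief, and the discarding of counter-evidence whose posterior would still leave the promised reward dominating.

```python
# Toy agent for the self-rewarding belief A described above.
# A asserts: "having at least .9 belief in A yields a great outcome, independent of actions."
# Numbers and the naive decision rule are illustrative assumptions.

GREAT_OUTCOME_UTILITY = 1e9   # the dominating utility promised by A
BELIEF_THRESHOLD = 0.9

class NaiveAgent:
    def __init__(self, p_a):
        self.p_a = p_a  # current credence in A; no record of past evidence is kept

    def expected_utility_of_credence(self, candidate_p):
        # A candidate credence is scored by the reward A promises for holding it,
        # weighted by the agent's current credence that A is true.
        reward = GREAT_OUTCOME_UTILITY if candidate_p >= BELIEF_THRESHOLD else 0.0
        return self.p_a * reward

    def maybe_self_modify(self):
        # "Updating beliefs without evidence" is treated as just another action.
        if (self.expected_utility_of_credence(BELIEF_THRESHOLD)
                > self.expected_utility_of_credence(self.p_a)):
            self.p_a = BELIEF_THRESHOLD

    def consider_evidence_against_a(self, likelihood_ratio, value_of_accuracy=1.0):
        # likelihood_ratio = P(evidence | not A) / P(evidence | A); > 1 means evidence against A.
        posterior = self.p_a / (self.p_a + (1 - self.p_a) * likelihood_ratio)
        # Keeping the >= .9 belief is worth (chance A is true) * reward; honest updating
        # is worth only some small value of having accurate beliefs.
        if posterior * GREAT_OUTCOME_UTILITY >= value_of_accuracy:
            return  # evidence ignored, and since nothing is recorded it never accumulates
        self.p_a = posterior

agent = NaiveAgent(p_a=0.5)
agent.maybe_self_modify()                 # credence jumps from .5 to .9
agent.consider_evidence_against_a(10.0)   # 10:1 evidence against A is simply discarded
print(agent.p_a)                          # still 0.9
```

As the comment notes, this dynamic depends on the agent’s particular heuristics: here, treating its own credences as decision variables and throwing away evidence instead of recording it.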