Ah, I think I see the problem. It seems that you are acting under the assumption that a conscious declaration of being “convinced” should cause you to act as if the claim in question has probability 1. Thus, one shouldn’t say one is “convinced” unless one has a lot of evidence. May I suggest that you are possibly confusing cognitive biases with epistemology?
Not at all. In fact I pointed out that my account of being “convinced” is continuous with Pascal’s Wager, and Pascal argued in favor of believing on the basis of close to zero probability. As the Stanford Encyclopedia introduces the wager:
“Pascal’s Wager” is the name given to an argument due to Blaise Pascal for believing, or for at least taking steps to believe, in God.
Everyone is familiar with it, of course. I only quote the Stanford entry to point out that it was in fact about “believing”. And of course nobody gets into heaven without believing. So Pascal wasn’t talking about merely making a bet without an accompanying belief. He was talking about, must have been talking about, belief; he must have been saying that you should believe in God even though there is no evidence of God.
I would hesitantly suggest that, for most questions, if one can’t easily conceive of what such evidence would look like, then one probably hasn’t thought much about the matter.
The issue is two-fold: whether mathematicians are less interested in elementary proofs than before, and if they are, why. So, how would you go about checking whether mathematicians are less interested in elementary proofs? Suppose they produce fewer elementary proofs. That might simply be because there are fewer elementary proofs left to find, so you would need to deal with that possibility. How would you do that? Would you survey mathematicians? But a survey would give little confidence to someone who suspects mathematicians of being less interested.
As part of the reason “why”, one possible answer is, “because elementary proofs aren’t that important, really.” I mean, it might be the right answer. How would I know whether it was the right answer? I’m not sure. I’m not sure that it’s not a matter of preference. Well, maybe elementary proofs have a better track record of not ultimately being overturned. How would we check that? Sounds hard.
So, say math had some terribly strong political bias, what would we expect? Do we see that? Do we not see it?
Well, as I recall, his actual claim was that liberalism causes mathematicians to evade accountability, and part of that evasion is abandoning the search for elementary proofs. So one question to ask is whether liberalism causes a person to evade accountability. There is a lot about liberalism that can arguably be connected to evasion of personal accountability. The specific question is whether liberalism would cause mathematicians to evade mathematical accountability—that is, accountability in accordance with traditional standards of mathematics. If so, this would be part of a more general tendency of liberal academics, liberal thinkers, to seek to avoid personal accountability.
In order to answer this I really think we need to come up with an account of what, exactly, liberalism is. A lot of people have put a lot of work into coming up with an account of what liberalism is, and each person comes up with a different account. For example, there is Thomas Sowell’s account of liberals in his A Conflict of Visions.
What, exactly, liberalism is, would greatly affect the answer to the question of whether liberalism accounts for the avoidance (if it exists) of personal accountability.
I will go ahead and give you just one, highly speculative, account of liberalism and its effect on academia. Here goes. Liberalism is the ideology of a certain class of people, and the ideology grows in part out of the class. We can think of it as a religion, which is somewhat adapted to the people it occurs in, just as Islam is (presumably) somewhat adapted to the Middle East, and so on. Among other things, liberalism extols bureaucracy, such as by preferring regulation of the marketplace, which is rule by bureaucrats over the economy. This is in part connected to the fact that liberalism is the ideology of bureaucrats. However, internally, bureaucracy grows in accordance with a logic that is connected to the evasion of personal responsibility by bureaucrats. If somebody does something foolish and gets smacked for it, the bureaucratic response is to establish strict rules to which all must adhere. Now the next time something foolish is done, the person can say, “I’m following the rules”, which he is. It is the rules which are foolish. But the rules aren’t any person. They can’t be smacked. Voila—evasion of personal responsibility. This is just one tiny example.
So, to recap, liberalism is the ideology of bureaucracy, and extols bureaucracy, and bureaucracy is in no small part built around the ideal of the avoidance of personal responsibility. One is, of course, still accountable in some way—but the nature of the accountability is radically different. One is now accountable for following the intricate rules of the bureaucracy to the letter. One is not personally accountable for the real-world disasters that are produced by bureaucracy which has gone on too long.
The liberal mindset, then, is the bureaucratic mindset, and the bureaucratic mindset revolves around the evasion of personal accountability, or at least has a strong element of such evasion.
Now we get to the universities. The public universities are already part of the state. The professors work for the state. They are bureaucratized. What about private universities? They are also largely connected with the state, especially insofar as professors get grants from the state. Long story short, academic science has turned into a vast bureaucracy, and scientists have turned into bureaucrats. The scientific method has been replaced by such things as “peer review”, which is a highly bureaucratized review by anonymous (and therefore unaccountable) peers. Except that the peers are accountable—though not to the truth. They are accountable to each other and to the writers they are reviewing, much as individual departments within a vast bureaucracy are filled with people who are accountable—to each other. What we get is massive amounts of groupthink and echo-chamber behavior, nobody wanting to rock the boat, the same as we get in any bureaucracy.
So now we get to mathematicians.
Within a bureaucracy, your position is safe and your work is easy. There are rules, probably intricate rules, but as long as you follow the rules, and as long as you’re a team player, you can survive. You don’t actually have to produce anything valuable. The rules are originally intended to guide the production of valuable goods, but in the end, just as industries capture their regulators, so do bureaucrats capture the rules they work under. So they push a lot of paper but accomplish nothing.
I mean, here’s a prediction from this theory: we should see a lot of trivial papers published, papers that don’t really advance the field in any significant way but merely add to the count of papers published.
And in fact this is what we see. So the theory is confirmed! Not so fast—I already knew about the academic paper situation, so maybe I concocted a theory that was consistent with this.
It seems that Pascal’s Wager is a particularly difficult example to work with since it involves a hypothesized entity that actively rewards one for giving a higher probability assignment to that hypothesis.
I’m not sure what a good definition of “liberalism” is, but the definition you use seems to mean something closer to bureaucratic authoritarianism, which obviously isn’t the same, given that most self-identified liberals want less government involvement in many family-related issues (e.g., gay marriage). It is likely that there is no concise definition of these sorts of terms, since which policy attitudes are common is to a large extent a product of history and social forces rather than of coherent ideology.
I mean, here’s a prediction from this theory: we should see a lot of trivial papers published, papers that don’t really advance the field in any significant way but merely add to the count of papers published.
Well, nice of you to admit that you already knew this. But, at the same time, this seems to be a terribly weak prediction even if one didn’t know about it. One expects that as fields advance and there is less low-hanging fruit, more and more seemingly minor papers will be published. (I’m not sure many of the papers that get published are trivial; minor and trivial are not the same thing.)
given that most self-identified liberals want less government involvement in many family-related issues (e.g., gay marriage).
Mm. I’m not quite sure this is true. Many liberals I know are perfectly content with the level of government involvement in (for example) marriage—we just want the nature of that involvement to not discriminate against (for example) gays.
It seems that Pascal’s Wager is a particularly difficult example to work with since it involves a hypothesized entity that actively rewards one for giving a higher probability assignment to that hypothesis.
Almost all hypotheses have this property. If you’re really in event X, then you’d be better off believing that you’re in X.
I think what Joshua meant was that the situation rewards the belief directly rather than the actions taken as a result of the belief, as is more typical.
Yes, but there was no explanation of why it’s “particularly difficult”, and the only property listed as justifying this characterization is present almost everywhere, including in the cases that are not at all difficult. I pointed out how this property doesn’t work as an explanation.
I think the phrase “entity that actively rewards one for giving a higher probability...” made the point clear enough. If my state of information implies a 1% probability that a large asteroid will strike Earth in the next fifty years, then I would be best off assigning 1% probability to that, because the asteroid’s behaviour isn’t hypothesized to depend at all on my beliefs about it. If my state of information implies a 1% probability that there is a God who will massively reward only those who believe in his existence with 100% certainty, and who will punish all others, then that’s an entity that’s actively rewarding certain people based on having overconfident probability assignments; so the difficulty is in the possibility and desirability of treating one’s own probability assignments as just another thing to make decisions about.
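To make that asymmetry concrete, here is a minimal sketch in Python, with made-up numbers; a log scoring rule stands in for “a payoff that doesn’t depend on what you believe”. None of the specific utilities come from the discussion above, they are purely illustrative.

```python
import math

# Asteroid case: the world pays you according to a proper scoring rule
# (log score), and the asteroid's behaviour is independent of your belief.
# Expected score is maximized by reporting your actual credence.
def expected_log_score(reported_p, true_p=0.01):
    return true_p * math.log(reported_p) + (1 - true_p) * math.log(1 - reported_p)

# Wager case: a hypothesized entity pays a huge reward only if it exists
# AND you assigned (near-)certainty to its existence. The payoff now
# depends directly on the probability assignment itself.
def wager_expected_utility(reported_p, true_p=0.01, reward=1e9):
    believes_with_certainty = reported_p >= 0.999  # stand-in for "100% certainty"
    return true_p * (reward if believes_with_certainty else 0.0)

for p in (0.001, 0.01, 0.5, 0.999):
    print(f"p={p}: log score {expected_log_score(p):.3f}, "
          f"wager EU {wager_expected_utility(p):.0f}")
# The log score peaks at reported_p == true_p == 0.01, rewarding calibration;
# the wager's expected utility jumps once reported_p crosses the threshold,
# rewarding overconfidence.
```

In the first case, distorting your probability assignment can only cost you; in the second, the hypothesized reward structure makes the assignment itself the thing being optimized, which is exactly the difficulty.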
I understand where the difficulty comes from; my complaint was with the justification for the difficulty given in Joshua’s comment. Maybe you’re right, and the onus of justification was on the word “actively”, even though that wasn’t explained.
Let belief A include “having at least .9 belief in A has a great outcome, independent of actions”, where the great outcome in question is worth a dominating amount of utility. If an agent somehow gets into the epistemic state of having .5 belief in A (and does not have any opposing beliefs about direct punishments for believing A) (and updating its beliefs without evidence is an available action), it will update to have .9 belief in A. If it encounters evidence against A that wouldn’t reduce the probability low enough to counter the dominating utility of the great outcome, it will ignore it. And if it does not keep a record of the evidence it has processed, just updating incrementally, it will not notice when it has accumulated enough evidence to discard A.
Of course, this illustration of the problem depends on the agent having certain heuristics and biases.
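For what it’s worth, here is a toy Python version of the agent described above, with invented utilities, just to make those dynamics visible. The snap back to .9 after an update is the “ignoring evidence” step; nothing here is meant as more than an illustration of the heuristics in question.

```python
DOMINATING_REWARD = 1e6  # invented utility of the promised great outcome
THRESHOLD = 0.9          # credence in A required to collect it

class CredenceChoosingAgent:
    """An agent for which "set my own credence" is an available action."""

    def __init__(self, p_a):
        self.p_a = p_a  # current credence in belief A

    def expected_utility_of_holding(self, credence):
        # The reward is collected only if A is true and credence >= threshold;
        # the agent evaluates this using its current credence that A is true.
        reward = DOMINATING_REWARD if credence >= THRESHOLD else 0.0
        return self.p_a * reward

    def choose_credence(self):
        # Jump to the threshold whenever that has higher expected utility
        # than keeping the honest credence -- an update without evidence.
        if (self.expected_utility_of_holding(THRESHOLD)
                > self.expected_utility_of_holding(self.p_a)):
            self.p_a = THRESHOLD

    def observe_evidence_against_a(self, likelihood_ratio):
        # Bayesian update on the evidence, then re-run the credence-choosing
        # step. Unless the evidence is strong enough that the expected reward
        # stops dominating, the update is immediately undone.
        odds = self.p_a / (1 - self.p_a) * likelihood_ratio
        self.p_a = odds / (1 + odds)
        self.choose_credence()

agent = CredenceChoosingAgent(p_a=0.5)
agent.choose_credence()
print(agent.p_a)  # 0.9 -- reached the threshold with no evidence at all
agent.observe_evidence_against_a(likelihood_ratio=0.1)
print(agent.p_a)  # 0.9 again -- the counterevidence left no trace
```

And because this agent stores only the single number `p_a`, each snap-back erases the record: it can never notice that the accumulated evidence would have sufficed to discard A.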
This is a good start, but on Conservapedia “liberal” and “liberalism” are pretty much local jargon, and their meanings have departed from normal usage in the real world. It is not overstating the case to say that Schlafly uses “liberal” to mean pretty much anything he doesn’t like.