Is bias within academia ever actually avoidable?

Let us take the example of Daniel Dennett vs David Chalmers. Dennett calls philosophical zombies an “embarrassment,” while Chalmers continues to double down on his conclusion that consciousness cannot be explained in purely physical terms. If Chalmers conceded and switched teams, he would become “just another philosopher,” while Dennett would score an academic victory.
As an aspiring world-class philosopher, you have little incentive to adopt the dominant view: if you do, you become just another ordinary philosopher. By adopting a radically different stance, you establish an entirely new “school” and place yourself at its helm. Rising to the helm of a well-established school, e.g. physicalism or compatibilism, would take considerably more effort.
Thus, motivated skepticism and motivated reasoning seem to me to be completely unavoidable in academia.

Are you sure that’s an argument for it being completely unavoidable, or just an argument that our current incentive structures are not very good?

It surely is an incentive-structure problem. However, I am uncertain to what extent incentive structures can be “designed”. They seem to come about as the result of thousands of years of gene-culture coevolution.
Peer review has a similar incentive misalignment. Why would you spend a month reviewing someone else’s paper when you could write your own instead? Scott Aaronson made this point during one of his AMAs, but he didn’t attempt to offer a solution.
Do we need more academics who agree with the status quo? If you reframe your point as “academia selects for originality,” it doesn’t seem such a bad thing. Research requires applied creativity: creating new ideas that are practically useful. A researcher who concludes that the existing solution to a problem is the best one is only marginally useful.
The debate between Chalmers and Dennett is practically useful because it lays out the boundaries of the dispute and explores both sides of the argument. Chalmers is naturally more of a contrarian and Dennett more of a small-c conservative; people fall into these natural categories without much prompting from institutional incentives.
The creative process can be split into idea generation and idea evaluation. Some people are good at generating wacky, out-there ideas, and others are better at judging the quality of those ideas. As de Bono has argued, it is best to keep some hygiene between the two, because they require different kinds of processing. I think there is a family resemblance here with exploration-exploitation trade-offs in ML.
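The analogy can be made concrete with the classic multi-armed bandit setting, where an agent must balance trying arms at random (generation/exploration) against pulling the best-known arm (evaluation/exploitation). A minimal epsilon-greedy sketch; the arm means and parameters below are illustrative choices, not anything from the discussion:

```python
import random

def epsilon_greedy_bandit(true_means, epsilon=0.1, steps=10_000, seed=0):
    """Simulate an epsilon-greedy agent on a Gaussian multi-armed bandit.

    With probability epsilon the agent explores a random arm;
    otherwise it exploits the arm with the best estimated mean so far.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms        # pulls per arm
    estimates = [0.0] * n_arms   # running mean reward per arm
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)  # explore: generate a candidate
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])  # exploit
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        # Incremental update of the running mean for this arm.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward
    return estimates, counts, total_reward

est, counts, total = epsilon_greedy_bandit([0.1, 0.5, 0.9])
```

With epsilon = 0, the agent can lock onto a mediocre arm forever; with epsilon = 1, it never cashes in on what it has learned. The analogous worry for academia is an institution tuned entirely toward one end of that dial.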
TL;DR: I don’t think incentives are the only constraint faced by academia. It is also difficult for individuals to be both the generators and the evaluators of their own ideas, and both processes are necessary.
Do rational communities undervalue idea generation because of their focus on rational judgement?
You make excellent points. The growth of knowledge is ultimately a process of creativity alternating with criticism, and I agree with you that idea generation is underappreciated. Outlandish ideas are met with ridicule most of the time.
This passage from Quantum Computing Since Democritus by Scott Aaronson captures this so well:
[I have changed my attitudes towards] the arguments of John Searle and Roger Penrose against “strong artificial intelligence.” I still think Searle and Penrose are wrong on crucial points, Searle more so than Penrose. But on rereading my 2006 arguments for why they were wrong, I found myself wincing at the semi-flippant tone, at my eagerness to laugh at these celebrated scholars tying themselves into logical pretzels in quixotic, obviously doomed attempts to defend human specialness. In effect, I was lazily relying on the fact that everyone in the room already agreed with me – that to these (mostly) physics and computer science graduate students, it was simply self-evident that the human brain is nothing other than a “hot, wet Turing machine,” and weird that I would even waste the class’s time with such a settled question. Since then, I think I’ve come to a better appreciation of the immense difficulty of these issues – and in particular, of the need to offer arguments that engage people with different philosophical starting-points than one’s own.
I think we need to strike a balance between demanding the veracity of ideas and tolerating their outlandishness. This topic has always fascinated me, but I don’t know of a concrete criterion for effective hypothesis generation. The simplicity criterion of Occam’s Razor is a start, but it is not the be-all and end-all.
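For what it’s worth, one way to make Occam’s Razor concrete is the minimum-description-length idea: prefer the hypothesis that minimizes the bits needed to state the model plus the bits needed to encode the data under it. A toy sketch for binary data; the flat 8-bit cost for stating a learned bias is an arbitrary illustrative choice, not a principled one:

```python
import math

def two_part_code_length(data, p, model_bits):
    """Total description length in bits: cost of stating the model
    plus the Shannon code length of the data under it."""
    p = min(max(p, 1e-9), 1 - 1e-9)  # guard against log(0)
    heads = sum(data)
    tails = len(data) - heads
    data_bits = -(heads * math.log2(p) + tails * math.log2(1 - p))
    return model_bits + data_bits

def preferred_model(data):
    """Compare a 'fair coin' hypothesis against a 'biased coin' whose
    learned bias must itself be described (here at a flat 8-bit cost)."""
    fair = two_part_code_length(data, 0.5, model_bits=1)
    biased = two_part_code_length(data, sum(data) / len(data), model_bits=8)
    return "fair" if fair <= biased else "biased"
```

On a 70-heads sequence of length 100, the biased model earns back its extra description cost; on a balanced sequence, the fair model’s shorter description wins. The razor here is automatic: extra parameters are tolerated only when they pay for themselves in compression, which is one (partial) answer to how outlandishness might be priced.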