The point is not that scientists should be perfect in all spheres of human endeavor. But neither should anyone who really understands science deliberately start believing things without evidence. It’s not a moral question, merely a gross and indefensible error of cognition. It’s the equivalent of being trained to say that 2 + 2 = 4 on math tests, but when it comes time to add up a pile of candy bars you decide that 2 + 2 ought to equal 5 because you want 5 candy bars. You may do well on math tests, when you apply the rules that have been trained into you, but you don’t understand numbers. Similarly, if you deliberately believe without evidence, you don’t understand cognition or probability theory. You may understand quarks, or cells, but not science.
Newton may have been a hotshot physicist by the standards of the 17th century, but he wasn’t a hotshot rationalist by the standards of this one. (Laplace, on the other hand, was explicitly a probability theorist as well as a physicist, and he was an outstanding rationalist by the standards of that era.)
What makes you think it’s a deliberate act to start believing things without evidence?
What if it’s somewhere along a spectrum of time required to make a rational decision?
On the x-axis, at the far left we’ve got no time; at the far right, all the time in our lives.
On the y-axis we’ve got the effectiveness of the decision: the higher the curve, the better the performance.
The shape looks like the Yerkes-Dodson inverted-“U” relationship.
If we spend very little time on the decision, it’s likely to be an ineffective one.
If we spend heaps of time on it, the decision may be over-analysed and could well end up less effective than one made with some optimum amount of time.
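To make the inverted-U concrete, here is a toy sketch in Python. The functional form and the constants are my own assumptions, purely for illustration: deliberation gives diminishing returns while time itself carries a linear cost, so effectiveness rises, peaks, and falls.

```python
# Toy model of decision effectiveness vs. deliberation time.
# Functional form and constants are illustrative assumptions only.
import math

def effectiveness(t, gain=1.0, tau=10.0, cost=0.02):
    """Effectiveness after t seconds of deliberation.

    The first term saturates (diminishing returns on thinking);
    the second is a linear cost of time -- together, an inverted U.
    """
    return gain * (1.0 - math.exp(-t / tau)) - cost * t

# Grid-search the peak of the curve.
times = [0.5 * i for i in range(401)]  # 0 to 200 seconds
best_t = max(times, key=effectiveness)
print(f"optimum deliberation ~ {best_t:.1f}s, "
      f"effectiveness {effectiveness(best_t):.3f}")
```

Under these assumed numbers the peak sits around 16 seconds: stop sooner and the gains haven’t been captured, keep going and the time cost drags effectiveness back down.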
How much time could we spend on deciding to eat an apple? We could just grab it off the shelf and eat it. That might be ok, or it could result in us taking a bite of a rotten apple.
We could examine the apple for rottenness; we could examine the shop for its overall health standards; we could trace the apple’s journey back through the transport system, all the way to the tree; we could do a soil and pest analysis of the environment the apple grew in. This is probably over-analysis.
Instead, we could reach an optimum decision with only 30 seconds of observation: squeeze the apple and it doesn’t squish; look over its surface and there are no obvious holes or yucky markings.
The scientist does increase the time spent on decisions within their field; they believe their optimum amount of decision-making has moved to the right on the graph above, because that’s their field, their job, and their reputation. When they switch off their “work” processes, they move back to the left on the graph. Are they now being irrational, or have they simply acknowledged that their optimum decision-making no longer needs to be so strict?
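To put numbers on that shift, the same toy model from above gives a closed-form optimum: the peak of gain(1 - e^(-t/tau)) - cost*t sits at t* = tau * ln(gain / (cost * tau)), so raising the stakes (the gain term) pushes the optimum deliberation time to the right. The scenarios and figures below are invented for illustration.

```python
# How the optimum shifts with the stakes, using the same assumed model.
import math

def optimum_time(gain, tau=10.0, cost=0.02):
    """Closed-form peak of gain*(1 - exp(-t/tau)) - cost*t.

    Only valid when gain > cost * tau (otherwise deliberating never pays).
    """
    return tau * math.log(gain / (cost * tau))

print(f"low stakes (picking an apple):      t* ~ {optimum_time(gain=0.5):.0f}s")
print(f"high stakes (a result in my field): t* ~ {optimum_time(gain=50.0):.0f}s")
```

Same person, same rationality; only the stakes, and therefore the optimum, have moved.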
How much evidence is required to decide that the apple is safe?
What standard is reasonable for deciding to believe in something, and is context relevant to that standard?