MixedNuts’s comment reminded me of a good resource for such techniques, and, indeed, for generally improving one’s effectiveness at reading: How To Read A Book
It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.
-- Mark Twain
Clearly Dennett has his sources all mixed up.
Solaris by Stanislaw Lem is probably one of my all-time favourites.
Anathem by Neal Stephenson is very good.
Voted up mainly for the Greg Egan recommendations.
But the problem is worse than that because “Sometimes, crows caw” actually does allow you to make predictions in the way “electricity!” does not.
The problem is even worse than that, because “Sometimes, crows caw” predicts both the hearing of a caw and the non-hearing of a caw. So it does not explain either (at least, based on the default model of scientific explanation).
If we go with “Crows always caw and only crows caw” (along with your extra premises regarding lungs, sound, ears, etc.), then we might end up with a different model of explanation, one which takes explanation to be showing that what happened had to happen.
The overall problem you seem to have is that neither of these kinds of explanation gives a causal story for the event (a causal account being a third model of scientific explanation).
(I wrote an essay on these models of scientific explanation earlier in the year for a philosophy of science course which I could potentially edit and post if there’s interest.)
Some good early papers on explanation (i.e., ones which set the subsequent debate going) are:
The Value of Laws: Explanation and Prediction (by Rudolf Carnap), Two Basic Types of Scientific Explanation, The Thesis of Structural Identity and Inductive-Statistical Explanation (all by Carl Hempel).
Huh, I thought there was a fair bit of evidence showing that people perform basically just as badly on tests which exploit cognitive biases after being told about those biases as they do in a state of ignorance.
I found Drive Yourself Sane useful for similar reasons.
I’ve been meaning to take a stab at Korzybski’s Science and Sanity (available on the interwebs, I believe) for a while, but I’ve heard it’s fairly impenetrable.
It’s a wonderful thing to be clever, and you should never think otherwise, and you should never stop being that way. But what you learn, as you get older, is that there are a few million other people in the world all trying to be clever at the same time, and whatever you do with your life will certainly be lost—swallowed up in the ocean—unless you are doing it with like-minded people who will remember your contributions and carry them forward. That is why the world is divided into tribes.
-- Neal Stephenson, The Diamond Age
I neglected to record from which character the quote came.
Rationality is highly correlated with intelligence
According to research by K. E. Stanovich, this is not the case:
Intelligence tests measure important things, but they do not assess the extent of rational thought. This might not be such a grave omission if intelligence were a strong predictor of rational thinking. But my research group found just the opposite: it is a mild predictor at best, and some rational thinking skills are totally dissociated from intelligence.
See http://www.project-syndicate.org/commentary/stanovich1
The classic example of riding a bicycle comes to mind. No amount of propositional knowledge will allow you to ride a bike successfully on the first go. Theory about the gyroscopic effects of the wheels and so forth comes to nothing until you hop on and try (and fail, repeatedly) to ride the damn thing.
Conversely, most people never arrive at the propositional knowledge that in order to steer the bike left, you must first turn the handlebars right (at least initially, and especially at high speed). But they do it unconsciously nonetheless.
But once procedural knowledge has been acquired, it also incorporates things like body memory and pure automatic habit, which, when observed in oneself, are just as likely to be rationalized after the fact as they are to be antecedently planned for sound reasons. It’s also easy to forget the initial propositions about a mastered procedure.
I’ve also noticed this kind of thing in my martial arts training.
For instance, high-level black belts will often be incredibly successful at a particular technique but unable to explain the procedure they use (or at least, they’ll be able to explain the basic procedure but not the specific detail that makes the difference). These details are often things the practitioner has learned unconsciously, and so are not propositional knowledge for them at all. Or they may be propositions taught long ago but since forgotten (except in muscle memory).
The difference between a great practitioner and a great teacher is usually the ability to spot the difference that makes a difference.
This tendency can be used for good, though. As long as you’re aware of the weakness, why not take advantage of it? Intentional self-priming, anchoring, and rituals of all kinds can be repurposed.
Most of these bad Philosophers were encountered during the few classes I took to get a Philosophy minor.
Initially I thought you were talking about professional Philosophers, not students. This clears that up, but it would be better to refer to them as Philosophy students. Most people wouldn’t call Science undergrads “Scientists”.
My experience with Philosophy has been the opposite. Almost all the original writing we’ve read has been discussed in terms of how and why its authors were wrong, and how modern theories address their errors. Admittedly, I’ve tailored my study to contain more History and Philosophy of Science than is usual, but I’ve found the same to be true of the standard Philosophy classes I’ve taken.
In summary, it probably varies from school to school, and I don’t think it’s entirely fair to tar the whole field of Philosophy with the same brush.
I would guess that it’s because comments are shorter and tend to express a single idea. Posts tend to have a series of ideas, which means a voter is less likely to think all of them are good/worthy of an upvote.
Thirded. I completed half of my degree in CS before switching to Philosophy. I’m finding it significantly more stimulating. I don’t think I learned anything in my CS classes that I couldn’t easily have taught myself (and had more fun doing so).
According to this post, doing so would be “against blog guidelines”. The suggested approach is to do top-level book review posts. I haven’t seen any of these yet, though.
That sorted it, thanks.
Having recently received a couple of Amazon gift certificates, I’m looking for recommendations of ‘rationalist’ books to buy. (It’s a little difficult to separate the wheat from the chaff.)
I’m looking mainly for non-fiction that would be helpful on the road to rationality. Anything from general introductory texts to more technical or math-oriented material. I found this OB thread, which has some recommendations, but I thought that:
1. this could be a useful thread for beginners (and others) here;
2. the ability to vote on suggestions would provide extra information.
So, if you have a book to recommend, please leave a comment. If you have more than one to recommend, make them separate comments so that each can be voted up/down individually.
Nothing terrible will happen to Wednesday if she deconverts
The terrible thing has already happened at this stage. Telling your children that lies are true (i.e., that Mormonism is true), when they have no better way of discerning the truth than simply believing what you say, is abusive and anti-moralistic. It is fundamentally destructive of a person’s ability to cope with reality.
I have never heard a story of deconversion that was painless. Everyone I know who has deconverted from a religious upbringing has undergone a great deal of internal (and often external) anguish. Even after deconverting, most have not been able to sever ties with the destructive people who doomed them to this pain in the first place.
Nietzsche, On Truth and Lie in an Extra-Moral Sense