Philosophy that can be “taken seriously by computer scientists”
I’ve long held CMU’s philosophy department in high regard. One of their leading lights, Clark Glymour, recently published a short manifesto, which Brian Leiter summed up as saying that “the measure of value for philosophy departments is whether they are taken seriously by computer scientists.”
Selected quote from Glymour’s manifesto:
Were I a university administrator facing a contracting budget, I would not look to eliminate biosciences or computer engineering. I would notice that the philosophers seem smart, but their writings are tediously incestuous and of no influence except among themselves, and I would conclude that my academy could do without such a department… But not if I found that my philosophy department retrieved a million dollars a year in grants and fellowships, and contained members whose work is cited and used in multiple subjects, and whose faculty taught the traditional subject well to the university’s undergraduates.
Also see the critique here, but I’d like to have Glymour working on FAI.
Full disclosures below. *
I agree with much of Glymour’s manifesto, but I think the passage quoted would have been better left on the cutting-room floor. One reason is given in the critique you link: lots of philosophy gets grants and citations and employment in diverse areas around the academy and elsewhere. Not all of it gets noticed in science or furthers a scientific project, even broadly construed. For example, John Hawthorne just won a multi-million dollar grant to do work in epistemology of religion, and a couple of years ago, Alfred Mele won a multi-million dollar grant to do more work on free will. I doubt that Glymour thinks either of these projects has the virtues of the work of his CMU colleagues. But by the “grant-winning” standard, administrators should love this sort of philosophy. By a sales or readership standard, administrators ought to be encouraging more pop-culture and philosophy schlock.
Another reason is given by Glymour in the same manifesto:
So, a good use for philosophy departments is to shelter iconoclastic thinkers who are not going to be either understood or appreciated by contemporary scientists. How are such people going to be successful grant-winners? I can see how they might successfully publish within philosophy, given a certain let-every-flower-bloom attitude in philosophy. And I can see how some philosophers might end up convincing some scientists to take their work seriously enough to fund it … eventually. But surely, some of Glymour’s iconoclasts will be missed or ignored in the grant-giving process. Better, I think, to have some places for people to think whatever they want to think and be supported in that thinking so that they do not have to panic about meeting the basic necessities of life. If that means having to put up with literary criticism, then so be it.
Disclosures. I did my dissertation under Peter Spirtes, and I’ve taken many enjoyable classes with Clark Glymour. I think Clark is an excellent person, and he is one of my philosophical heroes, although I don’t think I do a very good job of emulating him.
The manifesto has a nice paragraph where Glymour lists the contributions of many mathematical philosophers. This might be relevant to UDT.
Yup. This is why I was so surprised in January 2011 that Less Wrong had never before mentioned formal philosophy, which is the branch of philosophy most relevant to the open research problems of Friendly AI. See, for example, Self-Reference and the Acyclity of Rational Choice or Reasoning with Bounded Resources and Assigning Probabilities to Arithmetical Statements.
Thanks for the links. I just read those two papers and they don’t seem to be saying anything new to me :-(
In your linked piece, you were talking about formal epistemology. Here you say “formal philosophy.” Is that a typo, or do you think that formal epistemology exhausts formal philosophy? (I would hope not the latter, since lots of formal work gets done in philosophy outside epistemology!)
Formal epistemology is a subfield within formal philosophy, probably the largest.
Larger than logic? Hmm … maybe you’re thinking about “formal philosophy” in a way that I am unfamiliar with.
This is pretty much unrelated, but do you think maybe you could write a short post about the relevance of algorithmic probability for human rationality? There’s this really common error ’round these parts where people say a hypothesis (e.g., God, psi, etc.) is a priori unlikely because it is a “complex” hypothesis according to the universal prior. Obviously the “universal prior” says no such thing; people are just taking whatever cached category of hypotheses they think is more probable for other, unmentioned reasons and then labeling that category “simple”, which might have to do with coding theory but has nothing to do with algorithmic probability. Considering this appeal to simplicity is one of the most common attempted argument stoppers, it might benefit the local sanity waterline to discourage this error. Fewer “priors”, more evidence.
ETA: I feel obliged to say that though algorithmic probability isn’t that useful for describing humans’ epistemic states, it’s very useful for talking about FAI ideas; it’s basically a tool for transforming indexical information about observations into logical information about programs (and also proofs, thanks to the Curry–Howard isomorphism), which is pretty cool, among other reasons.
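For concreteness, here’s a minimal, purely illustrative sketch (Python, with made-up bit lengths that are my assumptions, not anything from the thread) of what the universal prior actually says: a hypothesis’s weight comes from the length of the shortest program that generates it, not from whether it feels “simple”.

```python
# A minimal, purely illustrative sketch of Solomonoff-style weighting.
# The bit lengths below are invented for demonstration only: in reality
# K(h), the shortest-program length, is uncomputable.

def universal_prior_weight(k_bits):
    """Weight 2^-K(h), where K(h) is the length in bits of the
    shortest program that outputs hypothesis h."""
    return 2.0 ** -k_bits

# A verbally "simple" hypothesis can need a long program, and vice versa.
hypotheses = {
    "feels simple, but shortest program is 120 bits": 120,
    "feels complex, but shortest program is 40 bits": 40,
}

weights = {h: universal_prior_weight(k) for h, k in hypotheses.items()}
total = sum(weights.values())
for h, w in weights.items():
    print(f"{h}: normalized prior ~ {w / total:.3g}")
```

The point being: unless you can actually exhibit (or bound) the short program, calling a hypothesis “complex” is just relabeling your intuitions.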
I already have a post about that. Unfortunately I screwed up the terminology and was rightly called on it, but the point of the post is still valid.
Thanks. I actually found your amendment more enlightening. Props again for your focus on the technical aspects of rationality, stuff like that is the saving grace of LW.
As Luke recently pointed out,
But many verbal restatements of verbal problems often, even typically, precede and facilitate the construction of this golden mathematical trophy. These portions of philosophy, which are the bulk of it, might easily fail to impress the computer scientists. But without them, progress in formal philosophy would be slower.
For those interested, the CMU philosophy department organizes an annual summer school in logic and formal epistemology.
It’s interesting that some of the humanities, and particularly some areas of philosophy, are constantly defending their research program or the value of the discipline as a whole. Apparently, the folks in other segments of academia want to see something useful. But it’s not so sad: in some cases the dialogue can happen, for example in formal epistemology, in the attempt to mix Bayesianism with conceptual analysis, trying to formalize concepts like ‘coherence’.
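As one concrete illustration of that kind of project (offered as an example, not as the definitive analysis of the concept): Shogenji’s ratio measure of coherence, sketched here in Python with invented probabilities.

```python
# A toy sketch of Shogenji's (1999) coherence measure for two propositions:
#   C(A, B) = P(A and B) / (P(A) * P(B))
# C > 1: the propositions support each other; C = 1: independent;
# C < 1: they undermine each other.

def shogenji_coherence(p_joint, p_a, p_b):
    return p_joint / (p_a * p_b)

# Invented probabilities, purely for illustration:
print(shogenji_coherence(p_joint=0.3, p_a=0.5, p_b=0.4))  # 1.5 -> mutually supporting
print(shogenji_coherence(p_joint=0.2, p_a=0.5, p_b=0.4))  # 1.0 -> independent
```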
I would roughly generalize to “scientists”. There is a need for people armed with both the tools of philosophy and the tools of science to discuss the meaning of many discoveries of the 20th/21st centuries: usually scientists are too narrowly focused and philosophers are not sufficiently well prepared. Nice to know that there are some exceptions (trusting you on this; I still have to go through the links).
My upcoming book, 1-Page-Classics, gives examples of a kind of “reduced” Bayesianism in the form of a one-pager called “Traditional Claims” and another called “Modal Realism.”
The book might also be interesting for virtue ethics, in the form of abbreviations of the famous scroll “The Mandate of Heaven,” Confucius’ “Analects,” and Lao Tzu’s “Tao Te Ching.”
I also abbreviate Epictetus’ “Enchiridion” in a creative fashion, and the “Republic of Plato” includes an excellent sophist’s critique of that project (poetry, the ring of Gyges, etc.).