Less Wrong Rationality and Mainstream Philosophy
Part of the sequence: Rationality and Philosophy
Despite Yudkowsky’s distaste for mainstream philosophy, Less Wrong is largely a philosophy blog. Major topics include epistemology, philosophy of language, free will, metaphysics, metaethics, normative ethics, machine ethics, axiology, philosophy of mind, and more.
Moreover, standard Less Wrong positions on philosophical matters have been standard positions in a movement within mainstream philosophy for half a century. That movement is sometimes called “Quinean naturalism” after Harvard’s W.V. Quine, who articulated the Less Wrong approach to philosophy in the 1960s. Quine was one of the most influential philosophers of the last 200 years, so I’m not talking about an obscure movement in philosophy.
Let us survey the connections. Quine thought that philosophy was continuous with science—and where it wasn’t, it was bad philosophy. He embraced empiricism and reductionism. He rejected the notion of libertarian free will. He regarded postmodernism as sophistry. Like Wittgenstein and Yudkowsky, Quine didn’t try to straightforwardly solve traditional Big Questions as much as he either dissolved those questions or reframed them such that they could be solved. He dismissed endless semantic arguments about the meaning of vague terms like knowledge. He rejected a priori knowledge. He rejected the notion of privileged philosophical insight: knowledge comes from ordinary knowledge, as best refined by science. Eliezer once said that philosophy should be about cognitive science, and Quine would agree. Quine famously wrote:
The stimulation of his sensory receptors is all the evidence anybody has had to go on, ultimately, in arriving at his picture of the world. Why not just see how this construction really proceeds? Why not settle for psychology?
But isn’t this using science to justify science? Isn’t that circular? Not quite, say Quine and Yudkowsky. It is merely “reflecting on your mind’s degree of trustworthiness, using your current mind as opposed to something else.” Luckily, the brain is the lens that sees its flaws. And thus, says Quine:
Epistemology, or something like it, simply falls into place as a chapter of psychology and hence of natural science.
Yudkowsky once wrote, “If there’s any centralized repository of reductionist-grade naturalistic cognitive philosophy, I’ve never heard mention of it.”
When I read that I thought: What? That’s Quinean naturalism! That’s Kornblith and Stich and Bickle and the Churchlands and Thagard and Metzinger and Northoff! There are hundreds of philosophers who do that!
Non-Quinean philosophy
But I should also mention that LW philosophy / Quinean naturalism is not the largest strain of mainstream philosophy. Most philosophy is still done in relative ignorance of (or indifference to) cognitive science. Consider the preface to Rethinking Intuition:
Perhaps more than any other intellectual discipline, philosophical inquiry is driven by intuitive judgments, that is, by what “we would say” or by what seems true to the inquirer. For most of philosophical theorizing and debate, intuitions serve as something like a source of evidence that can be used to defend or attack particular philosophical positions.
One clear example of this is a traditional philosophical enterprise commonly known as conceptual analysis. Anyone familiar with Plato’s dialogues knows how this type of inquiry is conducted. We see Socrates encounter someone who claims to have figured out the true essence of some abstract notion… the person puts forward a definition or analysis of the notion in the form of necessary and sufficient conditions that are thought to capture all and only instances of the concept in question. Socrates then refutes his interlocutor’s definition of the concept by pointing out various counterexamples...
For example, in Book I of the Republic, when Cephalus defines justice in a way that requires the returning of property and total honesty, Socrates responds by pointing out that it would be unjust to return weapons to a person who had gone mad or to tell the whole truth to such a person. What is the status of these claims that certain behaviors would be unjust in the circumstances described? Socrates does not argue for them in any way. They seem to be no more than spontaneous judgments representing “common sense” or “what we would say.” So it would seem that the proposed analysis is rejected because it fails to capture our intuitive judgments about the nature of justice.
After a proposed analysis or definition is overturned by an intuitive counterexample, the idea is to revise or replace the analysis with one that is not subject to the counterexample. Counterexamples to the new analysis are sought, the analysis revised if any counterexamples are found, and so on...
Refutations by intuitive counterexamples figure as prominently in today’s philosophical journals as they did in Plato’s dialogues...
...philosophers have continued to rely heavily upon intuitive judgments in pretty much the way they always have. And they continue to use them in the absence of any well articulated, generally accepted account of intuitive judgment—in particular, an account that establishes their epistemic credentials.
However, what appear to be serious new challenges to the way intuitions are employed have recently emerged from an unexpected quarter—empirical research in cognitive psychology.
With respect to the tradition of seeking definitions or conceptual analyses that are immune to counterexample, the challenge is based on the work of psychologists studying the nature of concepts and categorization judgments. (See, e.g., Rosch 1978; Rosch and Mervis 1975; Rips 1975; Smith and Medin 1981). Psychologists working in this area have been pushed to abandon the view that we represent concepts with simple sets of necessary and sufficient conditions. The data seem to show that, except for some mathematical and geometrical concepts, it is not possible to use simple sets of conditions to capture the intuitive judgments people make regarding what falls under a given concept...
With regard to the use of intuitive judgments exemplified by reflective equilibrium, the challenge from cognitive psychology stems primarily from studies of inference strategies and belief revision. (See, e.g., Nisbett and Ross 1980; Kahneman, Slovic, and Tversky 1982.) Numerous studies of the patterns of inductive inference people use and judge to be intuitively plausible have revealed that people are prone to commit various fallacies. Moreover, they continue to find these fallacious patterns of reasoning to be intuitively acceptable upon reflection… Similarly, studies of the “intuitive” heuristics ordinary people accept reveal various gross departures from empirically correct principles...
There is a growing consensus among philosophers that there is a serious and fundamental problem here that needs to be addressed. In fact, we do not think it is an overstatement to say that Western analytic philosophy is, in many respects, undergoing a crisis where there is considerable urgency and anxiety regarding the status of intuitive analysis.
Conclusion
So Less Wrong-style philosophy is part of a movement within mainstream philosophy to massively reform philosophy in light of recent cognitive science—a movement that has been active for at least two decades. Moreover, Less Wrong-style philosophy has its roots in Quinean naturalism from fifty years ago.
And I haven’t even covered all the work in formal epistemology toward (1) mathematically formalizing concepts related to induction, belief, choice, and action, and (2) arguing about the foundations of probability, statistics, game theory, decision theory, and algorithmic learning theory.
So: Rationalists need not dismiss or avoid philosophy.
Update: To be clear, though, I don’t recommend reading Quine. Most people should not spend their time reading even Quinean philosophy; learning statistics and AI and cognitive science will be far more useful. All I’m saying is that mainstream philosophy, especially Quinean philosophy, does make some useful contributions. I’ve listed more than 20 of mainstream philosophy’s useful contributions here, including several instances of classic LW dissolution-to-algorithm.
But maybe it’s a testament to the epistemic utility of Less Wrong-ian rationality training and thinking like an AI researcher that Less Wrong got so many things right without much interaction with Quinean naturalism. As Daniel Dennett (2006) said, “AI makes philosophy honest.”
Next post: Philosophy: A Diseased Discipline
References
Dennett (2006). Computers as Prostheses for the Imagination. Talk presented at the International Computers and Philosophy Conference, Laval, France, May 3, 2006.
Kahneman, Slovic, & Tversky (1982). Judgment Under Uncertainty: Heuristics and Biases. Cambridge University Press.
Nisbett & Ross (1980). Human Inference: Strategies and Shortcomings of Social Judgment. Prentice-Hall.
Rips (1975). Inductive judgments about natural categories. Journal of Verbal Learning and Verbal Behavior, 14: 665-681.
Rosch (1978). Principles of categorization. In Rosch & Lloyd (eds.), Cognition and Categorization (pp. 27-48). Lawrence Erlbaum Associates.
Rosch & Mervis (1975). Family resemblances: studies in the internal structure of categories. Cognitive Psychology, 7: 573-605.
Smith & Medin (1981). Categories and Concepts. Harvard University Press.
Note the way I speak with John Baez in the following interview, done months before the present post:
http://johncarlosbaez.wordpress.com/2011/03/25/this-weeks-finds-week-313/
I was happy to try and phrase this interview as if it actually had something to do with philosophy.
Although I actually invented the relevant positions myself, on the fly when FAI theory needed it, then Googled around to find the philosophical nearest neighbor.
The fact that you are skeptical about this, and suspect I suppose that I accidentally picked up some analytic descriptivism or mature folk morality elsewhere and then forgot I’d read about it, even though I hadn’t gone anywhere remotely near that field of philosophy until I wanted to try speaking their language, well, that strikes at the heart of why all this praise of “mainstream” philosophy strikes me the wrong way. Because the versions of “mature folk morality” and “reflective equilibrium” and “analytic descriptivism” and “moral functionalism” are never quite exactly right, they are built on entirely different premises of argument and never quite optimized for Friendly-AI thinking. And it seems to me, at least, that it is perfectly reasonable to simply ignore the field of philosophy and invent all these things the correct way, on the fly, and look up the nearest neighbor afterward; some wheels are simple enough that they’re cheaper to reinvent than to look up and then modify.
Can philosophers be useful? Yes. Is it possible and sometimes desirable to communicate with people who’ve previously read philosophy in philosophical standard language? Yes. Is Less Wrong a branch from the mighty tree of mainstream philosophy? No.
With this comment, I think our disagreement is resolved, at least to my satisfaction.
We agree that philosophy can be useful, and that sometimes it’s desirable to speak the common language. I agree that sometimes it is easier to reinvent the wheel, but sometimes it’s not.
As for whether Less Wrong is a branch of mainstream philosophy, I’m not much interested to argue about that. There are many basic assumptions shared by Quinean philosophy and Yudkowskian philosophy in opposition to most philosophers, even down to some very specific ideas like naturalized epistemology that to my knowledge had not been articulated very well until Quine. And both Yudkowskian philosophy and Quinean naturalism spend an awful lot of time dissolving philosophical debates into cognitive algorithms and challenging intuitionist thinking—so far, those have been the main foci of experimental philosophy, which is very Quinean, and was mostly founded by one of Quine’s students, Stephen Stich. Those are the reasons I presented Yudkowskian philosophy as part of the broadly Quinean movement in philosophy.
On the other hand, I’m happy to take your word for it that you came up with most of this stuff on your own, and only later figured out what the philosophers have been calling it, so in another way Yudkowskian philosophy is thoroughly divorced from mainstream philosophy—maybe even more than, say, Nassim Taleb’s philosophical work.
And once we’ve said all that, I don’t think any question remains about whether Less Wrong is really part of a larger movement in philosophy.
Anyway, thanks for this further clarification. I’ve learned a lot from our discussion. And I’m enjoying your interview with Baez. Cheers.
On the general issue of the origin of various philosophical ideas, I had a thought. Perhaps we take a lot of our tacit knowledge for granted in our thinking about attributions. I suspect that abstract ideas become part of wider culture and then serve as part of the reasoning of other people without them explicitly realizing the role of those abstracts. For example, Karl Popper had a concept of “World 3,” which was essentially the world of artifacts that are inherited from generation to generation and become a kind of background for the thinking of each successive generation who inherits that culture. That concept of “unconscious ideas” was also found in a number of other places (and has been, of course, for as far back as we can remember) and has been incorporated into many theories and explanations of varying usefulness. Some of Freud’s ideas have a similar rough feel to them, and his albeit unscientific ideas became highly influential in popular culture and influenced all sorts of things, including some productive psychology programs that emphasize influences outside of explicit awareness. Our thinking is given shape in part by a background that we aren’t explicitly aware of, and as a result we can’t always make accurate attributions of intellectual history except in terms of what has been written down. Some of the influence happens outside of our awareness via various mechanisms of implicit or tacit learning. We know a lot more than we realize we know; we “stand on the shoulders of others” in a somewhat obscure sense as well as the more obvious one.
An important implication of this might be that our reasoning starts from assumptions and conceptual schemes that we don’t really think about because they are “intuitive” and appear to each of us as “common sense.” However, it may be that “common sense” and “intuition” are forms of ubiquitous expertise that differ somewhat between people. If that is the case, then people reason from different starting points and perhaps can reason to different conclusions even when rigorously logical, and this would seemingly support a perspectivist view where logic is not by itself adequate to reconcile differences in opinion.
If that is the case, then it helps explain why we can’t seem to get rid of some fundamental problems just by clarifying concepts and reasoning from evidence. Those operations are themselves shaped by a background. One of the important roles of philosophy may be to give a voice to some of that background, a voice which may not always be scientific (that is, empirical, testable, effectively communicated through mathematics). So it may not be the philosophers who actually make the ideas available to us, but the philosophers who make them explicit outside of science.
I’m not saying that contradicts the possibly unique value of naturalistic and reductionistic approaches, systematization, etc., just that if we think of philosophy purely in utilitarian terms as a provider of new theories that feed science, we may miss the point of its role in culture and our tracking and understanding of the genesis of ideas.
You say,
and that you prefer to “invent all these things the correct way”.
From this and your preceding text I understand,
that philosophers have identified some meta-ethical theses and concepts similar to concepts and theses you’ve invented all by yourself,
that the philosophers’ theses and concepts are in some way systematically defective or inadequate, and
that the arguments used to defend the theses are different than the arguments which you would use to defend them.
(I’m not sure what you mean in saying the concepts and theses aren’t optimized for Friendly-AI thinking.)
You imply that you’ve done a comprehensive survey, to arrive at these conclusions. It’d be great if you could share the details. Which discussions of these ideas have you studied, how do your concepts differ from the philosophers’, and what specifically are the flaws in the philosophers’ versions? I’m not familiar with these meta-ethical theses but I see that Frank Jackson and Philip Pettit are credited with sparking the debate in philosophy—what in their thinking do you find inadequate? And what makes your method of invention (to use your term) of these things the correct one?
I apologize if the answers to these questions are all contained in your sequences. I’ve looked at some of them but the ones I’ve encountered do not answer these questions.
You disparage the value of philosophy, but it seems to me you could benefit from it. In another of your posts, ‘How An Algorithm Feels From Inside’, I came across the following:
This is false—the claim, I mean, that when you look at a green cup, you are seeing a picture in your visual cortex. On the contrary, the thing you see is reflecting light, is on the table in front of you (say), has a mass of many grams, is made of ceramic (say), and on and on. It’s a cup; it emphatically is not in your brainpan. Now, if you want to counter that I’m just quibbling over the meaning of the verb ‘to see’, that’s fine—my point is that it is you who are using it in a non-standard way, and it behoves you to give a coherent explication of your meaning. The history of philosophical discussions suggests this is not an easy task. The root of the problem is the effort to push the subject/object distinction (which verbs of perception seem to require) within the confines of the cranium. Typically, the distinction is only made more problematic: the object of perception (now a ‘picture in the visual cortex’) still doesn’t have the properties it’s supposed to (greenness), and the subject doing the seeing seems even more problematic. The self is made identical to or resident within some sub-region of the brain, about which various awkward questions now arise. Daniel Dennett has criticized this idea as the ‘Cartesian Theatre’ model of perception.
Having talked to critics of philosophy before, I know such arguments are often met with considerable impatience and derision. They are irrelevant to the understanding being sought, a waste of time, etc. This is fine—it may be true, for many, including you. If this is so, though, it seems to me the rational course is simply to acknowledge that its concerns are orthogonal to your own, and if you seem to come into collision (as above), to show that your misleading metaphor isn’t really doing any work, and hence is benign. In this case you aren’t re-inventing the wheel in coming up with your own theories, but something altogether different—a skid, maybe.
The community definitely needs to work on this whole “virtue of scholarship” thing.
LW community or the philosophy community?
I was talking about the LW community.
Those names are clearly made-up :)
It’s not Quinean naturalism. It’s logical empiricism with a computational twist. I don’t suggest that everyone go out and read Carnap, though. One way that philosophy makes progress is when people work in relative isolation, figuring out the consequences of assumptions rather than arguing about them. The isolation usually leads to mistakes and reinventions, but it also leads to new ideas. Premature engagement can minimize all three.
To some degree. It might be more precise to say that many AI programs in general are a computational update to Carnap’s The Logical Structure of the World (1928).
But logical empiricism as a movement is basically dead, while what I’ve called Quinean naturalism is still a major force.
I’d actually say the central shared features that you’re identifying (the dissolving of philosophical paradoxes instead of reifying them, and the centrality of observation and science) go back to Hume.
It certainly seems like Logical Positivism/Empiricism to me, which is a problem, because that was a crashing failure.
Philosophy quote of the day:
Aaron Sloman (1978)
According to the link:
So, we have a spectacular mis-estimation of the time frame—claiming 33 years ago that AI would be seen as important “within a few years”. That is off by one order of magnitude (and still counting!) Do we blame his confusion on the fact that he is a philosopher, or was the over-optimism a symptom of his activity as an AI researcher? :)
ETA:
I’m not sure I like the analogy. QM is foundational for physics, while AI merely shares some (as yet unknown) foundation with all those mind-oriented branches of philosophy. A better analogy might be “giving a degree course in biology which includes no exobiology”.
Hmmm. I’m reasonably confident that biology degree programs will not include more than a paragraph on exobiology until we have an actual example of exobiology to talk about. So what is the argument for doing otherwise with regard to AI in philosophy?
Oh, yeah. I remember. Philosophers, unlike biologists, have never shied away from investigating things that are not known to exist.
He didn’t necessarily predict that AI would be seen as important in that timeframe; what he said was that if it wasn’t, philosophers would have to be incompetent and their teaching irresponsible.
Full marks… but let’s be honest, he doesn’t get too many difficulty points for making that prediction...
I didn’t read the whole article. Where did Sloman claim that AI would be seen as important within a few years?
I inferred that he would characterize it as important in that time frame from:
together with a (perhaps unjustified) assumption that philosophers refrain from calling their colleagues “professionally incompetent” unless the stakes are important. And that they generally do what is fair.
When I read posts on Overcoming Bias (and sometimes also LW) discussing various human frailties and biases, especially those related to status and signaling, what often pops into my mind are observations by Friedrich Nietzsche. I’ve found that many of them represent typical OB insights, though expressed in a more poetic, caustic, and disorganized way. Now of course, there’s a whole lot of nonsense in Nietzsche, and a frightful amount of nonsense in the subsequent philosophy inspired by him, but his insight about these matters is often first-class.
I agree with this actually.
Also, how about William James and pragmatism? I read Pragmatism recently, and had been meaning to post about the many bits that sound like they could’ve been cut straight from the sequences—IIRC, there was some actual discussion of making beliefs “pay”—in precisely the same manner as the sequences speak of beliefs paying rent.
Yup.
Quinean naturalism, and especially Quine’s naturalized epistemology, are merely the “fullest” accounts of Less Wrong-ian philosophy to be found in the mainstream literature. Of course particular bits come from earlier traditions.
Parts of pragmatism (Peirce & Dewey) and pre-Quinean naturalism (Sellars & Dewey and even Hume) are certainly endorsed by much of the Less Wrong community. As far as I can tell, Eliezer’s theory of truth is straight-up Peircian pragmatism.
I see it as a closer match to Korzybski by way of Hayakawa.
Eliezer’s philosophy of language is clearly influenced by Korzybski via Hayakawa, but what is Korzybski’s theory of truth? I’m just not familiar.
Maybe I’m out of my depth here. But from a semantic standpoint, I thought that a theory of language pretty much is a theory of truth. At least in mathematical logic with Tarskian semantics, the meaning of a statement is given by saying what conditions make the statement true.
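For concreteness, that Tarskian idea (the meaning of a sentence is given by the conditions under which it is true in a model) can be sketched as a toy recursive evaluator for a propositional language. This is only an illustrative sketch; the `holds` function and the tuple encoding of formulas are assumptions made here for the example, not any standard library:

```python
def holds(formula, model):
    """Recursively evaluate truth relative to a model (dict: atom name -> bool),
    mirroring the recursive clauses of a Tarski-style truth definition."""
    kind = formula[0]
    if kind == "atom":                       # e.g. ("atom", "p")
        return model[formula[1]]
    if kind == "not":                        # ("not", phi)
        return not holds(formula[1], model)
    if kind == "and":                        # ("and", phi, psi)
        return holds(formula[1], model) and holds(formula[2], model)
    if kind == "or":                         # ("or", phi, psi)
        return holds(formula[1], model) or holds(formula[2], model)
    raise ValueError(f"unknown connective: {kind}")

# "p and not-q" is true in a model exactly when p holds and q does not:
model = {"p": True, "q": False}
sentence = ("and", ("atom", "p"), ("not", ("atom", "q")))
print(holds(sentence, model))  # True
```

The point of the sketch is just that the truth of a compound sentence in a model is computed from the truth of its parts, which is the shape of a truth-conditional semantics.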
Perplexed,
Truth-conditional accounts of truth, associated with Tarski and Davidson, are popular in philosophy of language. But most approaches to language do not contain a truth-conditional account of truth. Philosophy of language is most reliably associated with a theory of meaning: How is it that words and sentences relate to reality?
You might be right that Eliezer’s theory of truth comes from something like Korzybski’s (now defunct) theory of language, but I’m not familiar with Korzybski’s theory of truth.
My theory of truth is explicitly Tarskian. I’m explicitly influenced by Korzybski on language and by Peirce on “making beliefs pay rent”, but I do think there are meaningful and true beliefs such that we cannot experientially distinguish between them and mutually exclusive alternatives, i.e., a photon going on existing after it passes over the horizon of the expanding universe as opposed to it blinking out of existence.
Thanks for clarifying!
For the record, my own take:
As a descriptive theory of how humans use language, I think truth-conditional accounts of meaning are inadequate. But that’s the domain of contemporary linguistics, anyway—which tends to line up more with the “speech acts” camp in philosophy of language.
But we need something like a Tarskian theory of language and truth in order to do explicit AI programming, so I’m glad we’ve done so much work on that. And in certain contexts, philosophers can simply adopt a Tarskian way of talking rather than a more natural-language way of talking—if they want to.
And I agree about there being meaningful and true beliefs that we cannot experientially distinguish. That is one point at which you and I disagree with the logical positivists and, I think, Korzybski.
I’m only familiar with it through Hayakawa. The reference you provided to support your claim that the General Semantics theory of language is “defunct” says this about the GS theory of truth:
All of which sounds pretty close to Davidson and Tarski to me, though I’m not an expert. And not all that far from Yudkowsky.
I made my comment mentioning Language in Thought and Action before reading your post. I now see that your point was to fit Eliezer into the mainstream of Anglophone philosophy. I agree; he fits pretty well. And in particular, I agree (and regret) that he has been strongly influenced, directly or indirectly, by W. V. O. Quine. I’m not sure why I decided to mention Hayakawa’s book—since it (like the sequences) definitely is too lowbrow to be part of that mainstream. I didn’t mean for my comment to be taken as disagreement with you. I only meant to contribute some of that scholarship that you are always talking about. My point is, simply speaking, that if you are curious about where Eliezer ‘stole’ his ideas, you will find more of them in Hayakawa than in Peirce.
Probably, though Yudkowsky quotes Peirce here.
Korzybski’s theory of language places the source of meaning in non-verbal reactions to ‘basic’ undefined terms, or terms that define each other. This has two consequences for his theory of truth.
First, of course, he thinks we should determine truth using non-verbal experience.
Second, he explicitly tries to make his readers adopt ‘undefined terms’ and the associated reactions from math and science, due to the success of these systems. Korzybski particularly likes the words “structure,” “relation,” and “order”—he calls science structural knowledge and says its math has a structure similar to the world. As near as I can tell, he means by this that if b follows a in the theory then those letters should represent some B and A which have the ‘same’ relation out in the world.
I don’t know that 2011 science rejects his theory of language. His grand attempt to produce a system like Aristotle’s does seem like a sad tale in that, while his verbal formulation of the “logic of probability” seems accurate, he couldn’t apply it despite knowing more than enough math to do so.
From my small but nontrivial knowledge of Quine, he always struck me as having a critically wrong epistemology.
LW-style epistemology looks like this:
1. Let’s figure out how a perfectly rational being (AI) learns.
2. Let’s figure out how humans learn.
3. Let’s use that knowledge to fix humans so that they are more like AIs.
whereas Quine’s seems more like
Let’s figure out how humans learn
which seems to be missing most of the point.
His boat model always struck me as something confused that should be strongly modified or replaced by a Bayesian epistemology in which posterior follows logically and non-destructively from prior, but I may be in the minority in LW on this.
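For what it’s worth, the kind of non-destructive update being described can be made concrete with a minimal Bayes’-rule sketch, where the posterior follows mechanically from the prior and the likelihoods (the numbers and the `bayes_update` helper here are hypothetical, purely for illustration):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) from the prior P(H) and the two likelihoods,
    via Bayes' rule: P(H|E) = P(E|H)P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# A 10% prior, with evidence 4x as likely under H as under not-H:
posterior = bayes_update(prior=0.1, p_e_given_h=0.8, p_e_given_not_h=0.2)
print(round(posterior, 3))  # 0.308
```

Nothing is discarded in the update: the prior is reweighted by the evidence, and the same rule can be applied again to the next observation.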
It’s true that Quine lacked the insights of contemporary probability theory and AI, but remember that Quine’s most significant work was done before 1970. Quine was also a behaviorist. He was wrong about many things.
My point was that both Quine and Yudkowsky think that recursive justification bottoms out in using the lens that sees its own flaws to figure out how humans gain knowledge, and correcting mistakes that come in. That’s naturalized epistemology right there. Epistemology as cognitive science. Of course, naturalized epistemology has made a lot of progress since then thanks to the work of Kahneman and Tversky and Pearl and so on—the people that Yudkowsky learned from.
If you’re wondering why I’m afraid of philosophy, look no further than the fact that this discussion is assigning salience to LW posts in a completely different way than I do.
I mean, it seems to me that, where I think an LW post is important and interesting in proportion to how much it helps construct a Friendly AI, how much it gets people to participate in the human project, or how much confusion it permanently and completely dissipates, all of this here is prioritizing LW posts to the extent that they happen to imply positions on famous ongoing philosophical arguments.
That’s why I’m afraid to be put into any philosophical tradition, Quinean or otherwise—and why I think I’m justified in saying that their cognitive workflow is not like unto my cognitive workflow.
With this comment at least, you aren’t addressing the list of 20+ useful contributions of mainstream philosophy I gave.
Almost none of the items I listed have to do with famous old “problems” like free will or reductionism.
Instead, they’re stuff that (1) you’re already making direct use of in building FAI, like reflective equilibrium, or (2) stuff that is almost identical to the ‘coping with cognitive biases’ stuff you’ve written about so much, like Bishop & Trout (2004), or (3) stuff that is dissolving traditional debates into the cognitive algorithms that produce them, which you seem to think is the defining hallmark of LW-style philosophy, or (4) generally useful stuff like the work on catastrophic risks coming out of FHI at Oxford.
I hope you aren’t going to keep insisting that mainstream philosophy has nothing useful to offer after reading my list. On this point, it may be time for you to just say “oops” and move on.
After all, we already agree on most of the important points, like you said. We agree that philosophy is an incredibly diseased discipline. We agree that people shouldn’t go out and read Quine. We agree that almost everyone should be reading statistics and AI and cognitive science, not mainstream philosophy. We agree that Eliezer Yudkowsky should not read mainstream philosophy. We agree that “their” cognitive workflow is “not like unto” your cognitive workflow.
So I don’t understand why you would continue to insist that nothing (or almost nothing) useful comes out of mainstream philosophy, after the long list of useful things I’ve provided, many of which you are already using yourself, and many more of which closely parallel what you’ve been doing on Less Wrong all along, like dissolving traditional debates into cognitive algorithms and examining how to get at the truth more often through awareness and counteracting of our cognitive biases.
The sky won’t fall if you admit that some of mainstream philosophy is useful, and that you already make use of some of it. I’m not going to go around recommending people join philosophy programs. This is simply about making use of the resources that are out there. Most of those resources are in statistics and AI and cognitive science and physics and so on. But a very little of it happens to come out of mainstream philosophy, especially from that corner of mainstream philosophy called Quinean naturalism which shares lots of (basic) assumptions with Less Wrong philosophy.
As you know, this stuff matters. We’re trying to save the world, here. Either some useful stuff comes out of mainstream philosophy, or it doesn’t. There is a correct answer to that question. And the correct answer is that some useful stuff does come out of mainstream philosophy—as you well know, because you’re already making use of it.
I think it would be good for Less Wrong to have a few more academic philosophers and students of philosophy, to have a slightly higher philosophers/programmers ratio (as long as it doesn’t come with the expectation that everybody should understand a lot of concepts in philosophy that aren’t in the sequences).
I’m late, but… is there a substantial chain of cause and effect between the discovery of useful conclusions in mainstream philosophy and the use of those conclusions by Eliezer? Counterfactually, if those conclusions had not been drawn, would it be less likely that Eliezer found them anyway?
Eliezer seems to deny this chain of cause and effect. I wonder to what extent you think such a denial is unjustified.
You still haven’t given an actual use case for your sense of “useful”, only historical priority (the qualifier “come out” is telling, for example), and haven’t connected your discussion that involves the word “useful” to the use case Eliezer assumes (even where you answered that side of the discussion without using the word, by agreeing that particular use cases for mainstream philosophy are a loss). It’s an argument about definition of “useful”, or something hiding behind this equivocation.
I suggest tabooing “useful”, when applied to literature (as opposed to activity with stated purpose) on your side.
Eliezer and I, over the course of our long discussion, have come to some understanding of what would constitute useful. Though, Philosophy_Tutor suggested that Eliezer taboo his sense of “useful” before trying to declare every item on my list as useless.
Whether or not I can provide a set of necessary and sufficient conditions for “useful”, I’ve repeatedly pointed out that:
Several works from mainstream philosophy do the same things he has spent a great deal of time doing and advocating on Less Wrong, so if he thinks those works are useless then it would appear he thinks much of what he has done on Less Wrong is useless.
Quite a few works from mainstream philosophy have been used by him, so presumably he finds them useful.
I can’t believe how difficult it is to convince some people that some useful things come out of mainstream philosophy. To me, it’s a trivial point. Those resisting this truth keep trying to change the subject and make it about how philosophy is a diseased subject (agreed!), how we shouldn’t read Quine (agreed!), how other subjects are more important and useful (agreed!), and so on.
If it’s not immediately obvious how an argument connects to a specific implementable policy or empirical fact, default is to covertly interpret it as being about status.
Since there are both good and bad things about philosophy, we can choose to emphasize the good (which accords philosophers and those who read them higher status) or emphasize the bad (which accords people who do their own work and ignore mainstream philosophy higher status).
If there are no consequences to this choice, it’s more pleasant to dwell upon the bad: after all, the worse mainstream philosophy does, the more useful and original this makes our community; the better mainstream philosophy does, the more it suggests our community is a relatively minor phenomenon within a broader movement of other people with more resources and prestige than ourselves (and the more those of us whose time is worth less than Eliezer’s should be reading philosophy journals instead of doing something less mind-numbing).
I think this community is smart enough to avoid many such biases if given a real question with a truth-value, but given a vague open question like “Yay philosophy—yes or no?” of course we’re going to take the side that makes us feel better.
I think the solution is to present specific insights of Quinean philosophy in more depth, which you already seem like you’re planning to do.
Maybe my original post gave the wrong impression of “which side I’m on.” (Yay philosophy or no?) Like Quine and Yudkowsky, I’ve generally considered myself an “anti-philosophy philosopher.”
But you’re right that such vague questions and categorizations are not really the point. The solution is to present specific useful insights of mainstream philosophy, and let the LW community make use of them. I’ve done that in brief, here, and am working on posts to elaborate some of those items in more detail.
What disappoints me is the double standard being used (by some) for what counts as “useful” when presented in AI books or on Less Wrong, versus what counts as “useful” when it happens to come from mainstream philosophy.
I don’t think there is a double standard involved.
There are use cases (plans) that distinguish LW from mainstream philosophy that make philosophy less useful for those plans. There are other use cases where philosophy would be more useful. Making an overall judgment would depend on which use cases are important.
The concept of “useful” that leads to a classification which marks philosophy “not useful” might be one you don’t endorse, but we already discussed a few examples that show that such concepts can be natural, even if you’d prefer not to identify them with “usefulness”.
A double standard would filter evidence differently when considering the things it’s double-standard about. If we are talking about particular use cases, I don’t think there was significant distortion of attention paid for either case. A point where evidence could be filtered in favor of LW would be focus on particular use cases, but that charge depends on the importance of those use cases and their alternatives to the people selecting them. So far, you didn’t give such a selection that favors philosophy, and in fact you’ve agreed on the status of the use cases named by others.
So, apart from your intuition that “useful” is an applicable label, not much about the rules of reasoning and motivation about your claim was given. Why is it interesting to discuss whether mainstream philosophy is “useful” in the sense you mean this concept? If we are to discuss it, what kinds of arguments would tell us more about this fact? Can you find effective arguments about other people’s concepts of usefulness, given that the intuitive appeals made so far failed? How is your choice of concept of “usefulness” related to other people’s concepts, apart from the use of the same label? (Words/concepts can be wrong, but to argue that a word is wrong with a person who doesn’t see it so would require a more specific argument or reasoning heuristic.)
Since there seems to be no known easy way of making progress on discussing each other’s concepts, and the motivation seems to be solely to salve intuition, I think there is no ground for further object-level argument.
I love to read and write interesting things—which is why I take to heart Eliezer’s constant warning to be wary of things that are fun to argue.
But interestingness was not the point of my post. Utility to FAI and other Less Wrong projects was the point. My point was that mainstream philosophy sometimes offers things of utility to Less Wrong. And I gave a long list of examples. Some of them are things (from mainstream philosophy) that Eliezer and Less Wrong are already making profitable use of. Others are things that Less Wrong had not mentioned before I arrived, but are doing very much the same sorts of things that Less Wrong values—for example dissolution-to-algorithm and strategies for overcoming biases. Had these things been written up as Less Wrong posts, it seems they’d have been well-received. And in cases where they have been written up as Less Wrong posts, they have been well-received. My continuing discussion in this thread has been to suggest that therefore, some useful things do come from mainstream philosophy, and need not be ignored simply because of the genre or industry they come from.
By “useful” I just mean “possessing utility toward some goal.” By “useful to Less Wrong”, then, I mean “possessing utility toward a goal of Less Wrong’s/Eliezer’s.” For example, both reflective equilibrium and Epistemology and the Psychology of Human Judgment possess that kind of utility. That’s a very rough sketch, anyway.
But no, I don’t have time to write up a 30-page conceptual analysis of what it means for something to be “useful.”
But I think I still don’t understand what you mean. Maybe an example would help. A good one would be this: Is there a sense in which reflective equilibrium (a theory or process that happens to come from mainstream philosophy) is not useful to Eliezer, despite the fact that it plays a central role in CEV, his plan to save humanity from unfriendly AI?
Another one would be this: Is there a sense in which Eliezer’s writing on how to be aware of and counteract common cognitive biases is useful, but the nearly identical content in Bishop & Trout’s Epistemology and the Psychology of Human Judgment (which happens to come from mainstream philosophy) is not useful?
(I edited the grandparent comment substantially since publishing it, so your reply is probably out of date.)
Okay, I updated my reply comment.
Isn’t the smart move there not to play? What would make that the LW move?
Sounds plausible, and if true, a useful observation.
“Yay philosophy—yes or no?” and questions of that ilk seem like interesting questions to actually ask people.
You could, for instance, make a debate team lay out the pro and con positions.
A lot of the “nay philosophy” crowd end up doing philosophy, even while they continue to say “nay philosophy”. So I have a hard time taking the opinion at face value.
Moreover it’s not like there is one kind of thinking, philosophy, and another kind of thinking, non-philosophy. Any kind of evidence or argument could in principle be employed by someone calling himself a philosopher—or, inversely, by someone calling himself a non-philosopher. If you suddenly have a bright idea and start developing it into an essay, I submit that you don’t necessarily know whether, once the idea has fully bloomed, it will be considered philosophy or non-philosophy.
I don’t know whether it’s true that science used to be considered a subtopic of philosophy (“natural philosophy”), but it seems entirely plausible that it was all philosophy until, at some point, there was a terminological exodus, when physicists stopped calling themselves philosophers. In that older, more inclusive sense, anyone who says “nay philosophy” is also saying “nay science”. Keeping that in mind, what we now call “philosophy” might instead be called “what’s left of philosophy after the great terminological exodus”.
Of course, “what’s left” is also called “the dregs”. In light of that, what we now call “philosophy” might instead be called “the dregs of philosophy”.
That is exactly true. The old term for what we nowadays call “natural science” was “natural philosophy.” There are still relics of this old terminology, most notably that in English the title “doctor of philosophy” (or the Latin version thereof) is still used by physicists and other natural scientists. The “terminological exodus” you refer to happened only in the 19th century.
This is still happening, right? I once had a professor who suggested that philosophy is basically the process of creating new fields and removing them from philosophy—thence logic, mathematics, physics, and more recently linguistics.
That’s an interesting definition of philosophy, but I think philosophy does far more than that.
That’s true, I may have overstated his suggestion—the actual context was “why has philosophy made so little progress over the past several thousand years?” (“Because every time a philosophical question is settled, it stops being a philosophical question.”)
This provides a defense of the claim that luke was attacking earlier on the thread, that
“It’s totally reasonable to expect philosophy to provide several interesting/useful results [in one or a few broad subject areas] and then suddenly stop.”
Possibly, yes, but I’d expect philosophy to stop working on a field only after it’s recognized as its own (non-philosophy) area (if then), which, for example, morality is not.
Is theology a branch of philosophy?
Errr… it seems to me that theology in many ways acts like philosophy, with the addition of stuff like exegesis and apologetics… but any particular religion’s theology is distinct from the set of things we’d call “philosophy” as a monolithic institution. This is far from my area of expertise, however!
I’m worried part of this debate is just about status. When someone comes in and says “Hey, you guys should really pay more attention to what x group of people with y credentials says about z,” it reminds everyone here, most of whom lack y credentials, that society doesn’t recognize them as an authority on z, and so they are somehow less valuable than group x. So there is an impulse to say that z is obvious, that z doesn’t matter, or that having y isn’t really a good indicator of being right about z. That way, people here don’t lose status relative to group x.
Conversely, members of group x probably put money and effort into getting credential y and will be offended by the suggestion that what they know about doesn’t matter, that it is obvious or that their having credential y doesn’t indicate they know anything more than anyone else.
Me, I have an undergraduate degree in philosophy which I value so I’m sure I get a little defensive when philosophy is mocked or criticized around here. But most people here probably fit in the first category. Eliezer, being a human being like everybody else, is likely a little insecure about his lack of a formal education and perhaps particularly apt to deny an academic community status as domain experts in a fields he’s worked in (even though he is certainly right that formal credentials are overvalued).
I think a lot of this argument isn’t really a disagreement over what is valuable and what isn’t- it’s just people emphasizing or de-emphasizing different ideas and writers to make themselves look higher status.
...
These statements have no content; they just say “My stuff is better than your stuff.”
I think such debates unavoidably include status motivations. We are status-oriented, signaling creatures. Politics mattered in our ancestral environment.
Of course you know that I never said anything like either of the parody quotes provided. And I’m not trying to say Quinean philosophy is better than Less Wrong. The claim I’m making is a very weak claim: that some useful stuff comes out of mainstream philosophy, and Less Wrong shouldn’t ignore it when that happens just because the source happens to be mainstream philosophy.
Yes. But you’re right, so that side had to be a strawman, didn’t it?
I’m sorry; what do you mean?
Since I hold a pretty strong pro-mainstream-philosophy position (relative to others here, perhaps including yourself), I was a little more creative with that parody than with the other. I was attempting to be self-deprecating to soften my criticism (that the reluctance to embrace your position stems from status insecurities) so as not to set off tribal-war instincts.
Though on reflection it occurs to me that since I didn’t state my position in that comment or in this thread and have only talked about it in comments (some before you even arrived here at Less Wrong) it’s pretty unlikely that you or anyone else would remember my position on the matter, in which case my attempt at self-deprecation might look like a criticism of you.
Yeah… I’ve apparently missed something important to interpreting you.
For the record, if you hold “a pretty strong pro-mainstream philosophy position” then you definitely are more in favor of mainstream philosophy than I am. :)
It’s all relative. Surround me with academics and I sound like Eliezer.
But yes, once or twice I’ve even had the gall to suggest that some continental philosophers are valuable.
And for that, two days in the slammer! :)
I agree that you’ve agreed on many specific things. I suggest that the sense of remaining disagreement is currently confused through refusing to taboo “useful”. You use one definition, he uses a different one, and there is possibly genuine disagreement in there somewhere, but you won’t be able to find it without again switching to more specific discussion.
Also, taboo doesn’t work by giving a definition, instead you explain whatever you wanted without using the concept explicitly (so it’s always a definition in a specific context).
For example:
Instead of debating this point of the definition (and what constitutes “being used”), consider the questions of whether Eliezer agrees that he was influenced (in any sense) by quite a few works from mainstream philosophy (obviously), whether they provided insights that would’ve been unavailable otherwise (probably not), whether they happen to already contain some of the same basic insights found elsewhere (yes), whether they originate them (it depends), etc.
It’s a long list, not as satisfying as the simple “useful/not”, but this is the way to unpack the disagreement. And even if you agree on every fact, his sense of “useful” can disagree with yours.
I’ll wait to see if Eliezer really thinks we aren’t on the same page about the meaning of ‘useful’.
If reflective equilibrium, which plays a central role in Eliezer’s plan (CEV) to save humanity, isn’t useful, then I will be very surprised, and we will seem to be using different definitions of the term “useful.”
Has he repudiated the usefulness of reflective equilibrium (or of the concept, or the term)? I recall that he’s used it himself in some of the more recent summaries of CEV.
Are you, in your view, having The Problem with Non-Philosophers again?
It seems to me that the disagreement might be over the adjective “mainstream”. To me, that connotes what’s being mentioned (not covered in detail, merely mentioned) in broad overviews such as freshman introductory classes or non-major classes at college. As an analogy, in physics both general relativity and quantum mechanics are mainstream. They get mentioned in these contexts, though not, of course, covered. Something like timeless physics does not.
How much of the standard philosophy curriculum covers Quinean Naturalism?
I dunno, I think Eliezer and I are clear on what mainstream philosophy is. And if anything is mainstream, it’s John Rawls and Oxford University professors whose work Eliezer is already making use of.
Well, when I see:
That does not make me think that “mainstream philosophy” as a whole is doing useful work. Localized individuals and small strains appear to be. But even when the small strains are taken seriously in mainstream philosophy, that’s not the same as mainstream philosophy doing said work, and labeling any advances as “here’s mainstream philosophy doing good work” seems to be misleading.
No, mainstream philosophy “as a whole” is not doing useful work. That’s what the central section of my original post was about: Non-Quinean philosophy, and how its entire method is fundamentally flawed.
Even quite a lot of Quinean naturalistic philosophy is not doing useful work.
I’m not trying to mislead anybody. But Eliezer has apparently taken the extreme position that mainstream philosophy in general is worthless, so I made a long list of useful things that have come from mainstream philosophy—and some of it is not even from the most productive strain of mainstream philosophy, what I’ve been calling “Quinean naturalism.” Useful things sometimes come from unexpected sources.
In the above quote, the following replacements have been made: philosophy → religion, Quinean → Christian.
There are many ideas from religion that are not useless. It is not often the most productive source to learn from, however. Why filter ideas from religious texts when better sources are available, or when it is easier to recreate them within a better framework—a framework that actually justifies the idea? This is also important because, in my experience, people constantly fail to filter and end up accepting bad ideas.
I do not see EY arguing that mainstream philosophy has no useful nuggets. I see him arguing that filtering for those nuggets in general makes the process too costly. I see you arguing that “Quinean naturalism” is a rich vein of philosophy worth mining for nuggets. If you want to prove the worth of mining “Quinean naturalism,” you have to display nuggets that EY has not already found through better means.
I did list such nuggets that EY has not found through other means already, including several instances of “dissolution-to-algorithm”, which EY seems to think of as the hallmark of LW-style philosophy.
I wouldn’t call mainstream philosophy a “rich vein” that is (for most people) worth mining for nuggets. I’ve specifically said that people will get far more value reading statistics and AI and cognitive science. I’ve specifically said that EY should not be mining mainstream philosophy. What I’m saying is that if useful stuff happens to come from mainstream philosophy, why ignore it? It’s people like myself who are already familiar with mainstream philosophy, and for whom it doesn’t take much effort to list 20+ useful contributions of mainstream philosophy, who should bring those useful nuggets to the attention of Less Wrong.
What seems strange to me is to draw an arbitrary boundary around mainstream philosophy and say, “If it happens to come from here, we don’t want it.” And I think Eliezer already agrees with this, since of course he is already making use of several things from mainstream philosophy. But on the other hand, he seems to be insisting that mainstream philosophy has nothing (or almost nothing) useful to offer.
In that post you labeled that list as “useful contributions of mainstream philosophy,” which does not fit the criterion of nuggets not found by other means. Nor “here are things you have not figured out yet,” nor “see how this particular method is simpler and more elegant than the one you are currently using.” This is similar to what I think EY is expressing in: Show me this field’s power!
A list of 20 topics that are similar to LW is suggestive but not compelling. Compelling would be more predictive power, or correct predictions where LW methods have been known to fail. Compelling would be just one case covered in depth fitting the above criteria. Frankly, and not meant to reflect on you, listing 20 suggestive topics reminds me of fast-talk manipulation and/or an infomercial. I want to see a conversation digging deep on one topic. I want depth of proof, not breadth, because breadth by itself is not compelling, only suggestive.
I see you repeating this in many places, but I have yet to see EY suggest the useful parts of philosophy should be ignored.
I see EY arguing that philosophy is a field “whose poison a novice should avoid”. Note that it is novices who should avoid it, not well-grounded rationalists who should ignore it. I have followed EY’s conversations and I do not see him saying what you assert, though I see you repeatedly asserting that he, or LW in general, does. In theory it should not be hard to dissolve the problem if you can provide links to where you believe these assertions have been made.
I don’t understand.
Explanation of cognitive biases and how to battle against them on Less Wrong? “Useful.”
Explanation of cognitive biases and how to battle against them in a mainstream philosophy book? “Not useful.”
Dissolution of common (but easy) philosophical problem like free will to cognitive algorithm on Less Wrong? “Useful, impressive.”
Dissolution of common (but easy) philosophical problems in mainstream philosophy journals? “Not useful.”
Is this seriously what is being claimed? If it’s not what’s being claimed, then good—we may not disagree on anything.
Also: as I stated, several of the things I listed are already in use at Less Wrong, and have been employed in depth. Is this not compelling for now?
I’m planning in-depth explanations, but those take time. So far I’ve only done one of them: on SPRs.
As for my interpretation of Eliezer’s views on mainstream philosophy, here are some quotes:
One: “It seems to me that people can get along just fine knowing only what philosophy they pick up from reading AI books.” But maybe this doesn’t mean to avoid mainstream philosophy entirely. Maybe it just means that most people should avoid mainstream philosophy, which I agree with.
Two: “I expect [reading philosophy] to teach very bad habits of thought that will lead people to be unable to do real work.”
Three: “only things of that level [dissolution to algorithm] are useful philosophy. Other things are not philosophy or more like background intros.” Reflective equilibrium isn’t “of that level” of dissolution to cognitive algorithm, in any way that I can tell, and yet it plays a useful role in Eliezer’s CEV plan to save humanity. Epistemology and the Psychology of Human Judgment doesn’t say much about dissolution to cognitive algorithm, and yet its content reads like a series of Less Wrong blog posts on overcoming cognitive biases with “ameliorative psychology.” If somebody claims that those Less Wrong posts are useful but the Epistemology book isn’t, I think that’s a blatant double standard. And it seems that Eliezer in this quote is claiming just that, though again, I’m not clear what it means for something to be “of that level” of dissolution to algorithm.
And then, in his first comment on this post, Eliezer opened with: “I’m highly skeptical.” I took that to be a response to my claim that “rationalists need not ignore mainstream philosophy,” but maybe he was responding to some other claim in my original post.
But if I’ve been misinterpreting Eliezer this whole time, he hasn’t told me so. I’d sure appreciate that. That would be the simplest way to clear this up.
Here is one interpretation.
The standard sequences explanation of cognitive biases and how to battle against them on Less Wrong? “Useful.”
Yet another explanation of cognitive biases and how to battle against them in a mainstream philosophy book? “Not useful.”
Dissolution of difficult philosophical problem like free will to cognitive algorithm on Less Wrong? “Useful, impressive.”
Continuing disputation about difficult philosophical problems like free will in mainstream philosophy journals? “Not useful.”
Dissolution of common (but easy) philosophical problem arising from language misuse in mainstream philosophy journals? “Not useful.”
Explanation of how to dissolve common (but easy) philosophical problems arising from language misuse in LessWrong? “Useful”.
Good stuff of various kinds, surrounded by other good stuff on LessWrong? “Useful”.
Good stuff of various kinds, surrounded by error, confusion, and nonsense in mainstream philosophy journals? “Not useful.”
I’m not sure I agree with all of this, but it is pretty much what I hear Eliezer and others saying.
Yeah, if that’s what’s being claimed, that’s the double standard stuff I was talking about.
Of course there’s error, confusion, and nonsense in just about any large chunk of literature. Mainstream philosophy is particularly bad, but of course what I plan to do is pluck the good bits out and share just those things on Less Wrong.
I no longer remember your original post. Did you get that format from Perplexed? Or did he get it from you?
You state that Perplexed’s example is a double standard here. But Perplexed describes what happens on LW as different from what happens in mainstream philosophy, which does not fit the standard definition of a double standard. A double standard is a rule applied differently to essentially the same thing/idea/group. Perplexed’s statements imply that LW and mainstream philosophy are considerably different, which does not fit the description of a double standard.
As of yet I have not interpreted anything on LW as meaning the content of the quote above.
No, it is not compelling. In science, a theory which merely reproduces previous results is not compelling, only suggestive. A new theory must have predictive power in areas the old one did not, or be simpler (i.e., more elegant), to be considered compelling. That is how you show the power of a new theory.
Your assertion was:
Your quote one does not seem to support your assertion, by your own admission. My interpretation was that most people should avoid mainstream philosophy—perhaps the vast majority, and certainly novices. If possible, learn from a better source: since there is a vast amount available from better sources, and a vast amount of work to be done with those sources, why focus on lesser ones?
This does not support your assertion either. It only claims the methods of mainstream philosophy are bad habits for people who want to get things done.
This one does not seem to "draw an arbitrary boundary" either, so it does not support your assertion. Maybe a boundary, but EY then goes on to describe the boundary, so you have not supported your descriptor "arbitrary".
I think the difference between more and less useful is definitely being claimed. Having everything in one self-consistent system has many advantages: only one set of terminology to learn, and it is easier to build groups when everyone is using, or familiar with, the same terminology.
Out of your three quotes I do not see any "arbitrary boundary" being drawn by EY. He is drawing a boundary, but in no way do I see it as arbitrary. This boundary, and why it is drawn, is a point on which you do not seem to understand EY's reasoning; otherwise you would do your best to describe the algorithm that EY used to draw the boundary and then show how it is wrong, rather than just calling it arbitrary.
Really, I would have thought your main assertion was:
You have not shown that Quinean naturalism and the rest are a "centralized repository of reductionist-grade naturalistic cognitive philosophy"; you will have to provide a proof with depth, which you have not provided but are working on, to show this. So his skepticism is warranted, justified, prudent, and seems like a reasonable barrier to unproven ideas.
I think he has pointed it out. The three quotes above do not support your assertion or the descriptor "arbitrary". The difference is at a basic definitional and methodological level.
And I just read your resolution as I was writing this post. Frankly, it really seems like you jumped to conclusions about EY's position and its level of arbitrariness, to a degree which caused inefficiency. My main curiosity now is why you think you jumped to conclusions and what you are going to do to prevent it from happening again.
Yeah, I just disagree with your comment from beginning to end.
Yeah, and my claim is that LW content and some useful content from mainstream philosophy are not relevantly different, hence to praise one and ignore the other is to apply a double standard. Epistemology and the Psychology of Human Judgment, which reads like a sequence of LW posts, is a good example. So is much of the work I listed that dissolves traditional philosophical debates into the cognitive algorithms that produce the conflicting intuitions philosophers have used to go in circles for thousands of years.
This is a change of subject. I was talking about the usefulness of certain work in mainstream philosophy already used by Less Wrong, not proposing a new scientific theory. If your point applied, it would apply to the re-use of the ideas on Less Wrong, not to their origination in mainstream philosophy.
The strongest support for my interpretation of EY comes from quote #3, for reasons I explained in detail and you ignored. I suspect much of our confusion came from Eliezer’s assumption that I was saying everybody should go out and read Quinean philosophy, which of course I never claimed and in fact have specifically denied.
In any case, EY and I have come to common ground, so this is kinda irrelevant.
I’m fine with that. What counts as a ‘centralized repository’ is pretty fuzzy. Quinean naturalism counts as a ‘centralized repository’ in my meaning, but if Eliezer means something different by ‘centralized repository’, then we have a disagreement in words but not in fact on that point.
In the mind of EY, I assume, and some others, there is a difference. If the difference is not relevant, there would be a double standard; if there is a relevant difference, no double standard exists. I did not see you point out what that difference was, and why it was not relevant, before calling it a double standard.
Not a change of subject at all. I was just letting you know what standards I use for judging something suggestive vs. compelling, and suggesting that EY might be using a similar standard. I was answering your question "Is this not compelling for now?" with a no, plus exposition: the method by which I often judge how useful a work is. If EY uses a similar method, that would explain some of why you were not communicating well.
It is to be applied within the development of an individual's evolving beliefs. So someone holding LW beliefs who is then introduced to mainstream philosophy would use this standard before adopting mainstream philosophy's beliefs.
I do not like the idea of having conversations with people who think they magically know what I pay attention to and what I do not; I think it is unproductive. If you meant that I did not address your point, please say so, and say how I should instead.
I did not ignore it. I thought it supported an argument that EY draws a boundary between mainstream philosophy and LW, but it did not support the argument that he drew an arbitrary boundary.
My interpretation was that he was skeptical about the grade of the repository, not its centralness.
I don’t understand the distinction you’re making. These two statements mean the exact same thing to me: in general, mainstream philosophy is useless, though exceptions exist.
Admittedly. That’s not a good reason to look there, until the expected sources are exhausted.
What I’m trying to say is that the vast majority of mainstream philosophy is useless, but some of it is useful, and I gave examples.
I’ve also repeatedly agreed that most people should not be reading mainstream philosophy. Much better to learn statistics and AI and cognitive science. But for those already familiar with philosophy, for whom it’s not that difficult to name 20 useful ideas from mainstream philosophy, then… why not make use of them? It makes no sense to draw an arbitrary boundary around mainstream philosophy and say “If it comes from here, I don’t want it.” That’s silly.
I don’t understand the distinction you’re making. These two statements mean the exact same thing to me: in general, mainstream philosophy is useless, though exceptions exist.
I’ve frequently been criticized for suggesting that you hold that attitude. The usual response is that LW is not about friendly AI or has not much to do with the SIAI.
I don’t think you’re being fair to a lot of philosophers. I think you’re being fair to some philosophers, the ones who are sowing confusion. But you can’t just wave away the sophists, the charlatans, with a magic wand. They are out there creating confusion and drawing people away from useful and promising lines of thought. There are other philosophers out there who are doing what they can to limit the damage.
It’s a bit like war. Think of yourself as a scientist who is trying to build a rocket that will take us to Mars. But in the meantime there is a war going on. You might say, “this war is not helpful, because a stray missile might blow up my rocket, damn those generals and their toys.” But the problem is, without the generals like Dennett who are protecting your territory, your positions, the enemy generals will overrun your project and strip your rocket for parts.
You may think that the philosophers don’t matter, that they are just arguing in obscurity among themselves, but I don’t think that’s the case. I think that there is a significant amount of leakage, that ideas born and nurtured in the academy frequently spread to the wider society and infect essentially everyone’s way of thinking.
Who cares when his work was done. We want to know how to find work that helps us to understand things today. It’s not about how smart he was, but about how much his ideas can help us.
And my answer is “not much.” Like I say, all the basics of Quinean philosophy are already assumed by Less Wrong. I don’t recommend anyone read Quine. It’s (some of) the stuff his followers have done in the last 30 years that is useful—both stuff that is already being used by SIAI people, and stuff that is useful but (previously) undiscovered by SIAI people. I listed some of that stuff here.
What’s wrong with behaviorism? I was under the impression that behaviorism was outdated but when my daughter was diagnosed as speech-delayed and borderline autistic we started researching therapy options. The people with the best results and the best studies (those doing ‘applied behavior analysis’) seem to be pretty much unreconstructed Skinnerists. And my daughter is making good progress now.
I’ll take flawed philosophy with good results over the opposite any day of the week. But I’m still curious about flaws in the philosophy.
Personally, I’m finding that avoiding anthropomorphising humans, i.e. ignoring the noises coming out of their mouths in favour of watching their actions, pays off quite well, particularly when applied to myself ;-) I call this the “lump of lard with buttons to push” theory of human motivation. Certainly if my mind had much effect on my behaviour, I’d expect to see more evidence than I do …
I take exception to that: I have a skeletal structure, dammit!
I think the reference is to the brain rather than to the whole body.
(blink)
(nods) Yes, indeed.
Exception withdrawn.
Well played!
It sounds like what you are describing is rationalization, either doing it yourself or accepting people’s rationalization about themselves.
Pretty much. I’m saying “mind” for effect, and because people think the bit that says “I” has much more effect than it appears to from observed behaviour.
Yep. Anthropomorphizing humans is a disastrously wrong thing to do. Too bad everyone does it.
No, they just look like they're doing it; saying humans are anthropomorphizing would attribute more intentionality to humans than is justified by the data.
Well, the mind seems to. I'm using "mind" here to mean the bit that says "I", which could reflect on itself if it bothered to, thinks it runs the show, and comes up with rationalisations for whatever it does. Listening to these rationalisations, promises, etc. as anything other than vague pointers to behaviour is exceedingly foolish. Occasionally you can encourage the person to use their "mind" less annoyingly.
I think they anthropomorphise as some sort of default reflex. Possibly somewhere halfway down the spinal cord, certainly not around the cerebrum.
I may be wrong, but I think that SilasBarta is pointing out, maybe with some tongue-in-cheek, that you can’t accuse humans of anthropomorphizing other humans without yourself being guilty of anthropomorphizing those humans whom you accuse.
Edit: Looks like this was the intended reading.
I am finding benefits from trying not to anthropomorphise myself. That is, rather than thinking of my mind as being in control of my actions, I think of myself as a blob of lard which behaves in certain ways. This has actually been a more useful model, so that my mind (which appears to be involved in typing this, though I am quite ready to be persuaded otherwise) can get the things it thinks it wants to happen happening.
I was joking. :-P
Ha ha only serious ;-p
I'd watch their behaviour, which I would also have classed as an expression of the intent. Do they show they care? That being the thing you actually want.
May I recommend Dennett’s “Skinner Skinned”, in Brainstorms?
Okay, I read it. It’s funny how Dennett’s criticism of Skinner partially mirrors Luke’s criticism of Eliezer. Because Skinner uses terminology that’s not standard in philosophy, Dennett feels he needs to be “spruced up”.
“Thus, spruced up, Skinner’s position becomes the following: don’t use intentional idioms in psychology” (p. 60). It turns out that this is Quine’s position and Dennett sort of suggests that Skinner should just shut up and read Quine already.
Ultimately, I can understand and at least partially agree with Dennett that Skinner goes too far in denying the value of mental vocabulary. But, happily, this doesn’t significantly alter my belief in the value of Skinner type therapy. People naturally tend to err in the other direction and ascribe a more complex mental life to my daughter than is useful in optimizing her therapy. And I still think Skinner is right that objections to behaviorist training of my daughter in the name of ‘freedom’ or ‘dignity’ are misplaced.
Anyway, this was a useful thing to read—thank you, ciphergoth!
Thank you, holding the book in my hand and reading it now.
No, I'm talking about behaviorist psychology. Behaviorist psychology denied the significance (and sometimes the existence) of cognitive states. Showing that cognitive states exist and matter was what paved the way to cognitive science. Many insights from behaviorist psychology (operant conditioning) remain useful, but its central assumption is false, and it must be false for anyone to be doing cognitive science.
Okay, but now I’m getting a bit confused. You seem to me to have come out with all the following positions:
The worthwhile branch of philosophy is Quinean. (this post)
Quine was a behaviorist. (a comment on this post)
Behaviorism denies the possibility of cognitive science. (a comment on this post)
The worthwhile part of philosophy is cognitive science. (“for me, philosophy basically just is cognitive science”—Lukeprog)
Those things don’t seem to go well together. What am I misunderstanding?
Quinean naturalism does not have an exclusive lock on useful philosophy, but it’s the most productive because it starts from a bunch of the right assumptions (reductionism, naturalized epistemology, etc.)
Like I said, Quine was wrong about lots of things. Behaviorism was one of them. But Quine still saw epistemology as a chapter of the natural sciences on how human brains came to knowledge—the field we now know as “cognitive science.”
Quine apparently said, “I consider myself as behavioristic as anyone in his right mind could be”. That sounds good, can I subscribe to that?
Bayesian inference is not a big step up from Laplace, and the idea of an optimal model that humans should try to approximate is a common philosophical position.
Thanks so much. I didn’t know about Quine, and from what you’ve quoted it seems quite clearly in the same vein as LessWrong.
Also, out of curiosity, do you know if anything’s been written about whether an agent (natural or artificial) needs goals in order to learn? Obviously humans and animals have values, at least in the sense of reward and punishment or positive and negative outcomes—does anyone think that this is of practical importance for building processes that can form accurate beliefs about the world?
What you care about determines what your explorations learn about. An AI that didn’t care about anything you thought was important, even instrumentally (it had no use for energy, say) probably wouldn’t learn anything you thought was important. A probability-updater without goals and without other forces choosing among possible explorations would just study dust specks.
That was my intuition. Just wanted to know if there’s more out there.
What, you mean in mainstream philosophy? I don’t think mainstream philosophers think that way, even Quineans. The best ones would say gravely, “Yes, goals are important” and then have a big debate with the rest of the field about whether goals are important or not. Luke is welcome to prove me wrong about that.
I actually don’t think this is about right. Last time I asked a philosopher about this, they pointed to an article by someone (I.J. Good, I think) about how to choose the most valuable experiment (given your goals), using decision theory.
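For concreteness, the Good-style "choose the most valuable experiment" idea can be sketched as an expected-value-of-information calculation. This is a minimal toy sketch, not Good's actual formulation: the hypotheses, likelihoods, and payoff structure below are all invented for illustration.

```python
# Pick the experiment with the highest expected value of information:
# utility of acting optimally after seeing the result, minus utility
# of acting optimally right now. All numbers are made up.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H | E)."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

def best_act_utility(p_h, payoff=100.0):
    """Choose act A (pays off iff H) or act B (pays off iff not-H)."""
    return max(p_h * payoff, (1 - p_h) * payoff)

def value_of_experiment(prior_h, p_pos_given_h, p_pos_given_not_h):
    """Expected gain from running the experiment before acting."""
    p_pos = p_pos_given_h * prior_h + p_pos_given_not_h * (1 - prior_h)
    u_pos = best_act_utility(posterior(prior_h, p_pos_given_h, p_pos_given_not_h))
    p_h_neg = (1 - p_pos_given_h) * prior_h / (1 - p_pos)   # posterior after a negative result
    u_neg = best_act_utility(p_h_neg)
    u_after = p_pos * u_pos + (1 - p_pos) * u_neg
    return u_after - best_act_utility(prior_h)

# A diagnostic experiment versus a nearly uninformative one:
informative = value_of_experiment(0.5, p_pos_given_h=0.9, p_pos_given_not_h=0.1)
useless = value_of_experiment(0.5, p_pos_given_h=0.51, p_pos_given_not_h=0.49)
print(informative, useless)
```

The same structure generalizes to many candidate experiments: compute each one's value and pick the argmax, which is the decision-theoretic sense in which goals (the payoff structure) steer what is worth learning.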
Yes, that’s about right.
AI research is where to look in regards to your question, SarahC. Start with chapter 2 and the chapters with ‘decisions’ in the title in AI: A Modern Approach.
Thank you!
My first exposure was his mathematical logic book. At the time, I didn’t even realize he had a reputation as a philosopher per se. (I knew from the back cover of the book that he was in the philosophy department at Harvard, but I just assumed that that was where anyone who got sufficiently “foundational” about their mathematics got put.)
Ah, see, when I learned a little logic, I shuddered, muttered “That is not dead which can unsleeping lie,” and moved on. I’ll come back to it if it ever seems useful though.
Yah, I sometimes joke that logicians are viewed by mathematicians in the same way that mathematicians are viewed by normal people. Logic makes complete sense to me, but some of my professional mathematician friends cannot understand my tastes at all. I, on the other hand, cannot understand how one can get interested in homological algebra or other such things, when there are all these really pressing logical issues to solve :-)
That is exactly why I enjoy learning about logic.
Will Sawin, aspiring necromancer… That should be on your business card.
I should have a business card.
Could you clarify what you mean? When I parse your second paragraph, it comes across to my mind as three or four separate questions...
Ok, this is actually an area on which I’m not well-informed, which is why I’m asking you instead of “looking it up”—I’d like to better understand exactly what I want to look up.
Let’s say we want to build a machine that can form accurate predictions and models/categories from observational data of the sort we encounter in the real world—somewhat noisy, and mostly “uninteresting” in the sense that you have to compress or ignore some of the data in order to make sense of it. Let’s say the approach is very general—we’re not trying to solve a specific problem and hard-coding in a lot of details about that problem, we’re trying to make something more like an infant.
Would learning happen more effectively if the machine had some kind of positive/negative reinforcement? For example, if the goal is “find the red ball and fetch it” (which requires learning how to recognize objects and also how to associate movements in space with certain kinds of variation in the 2d visual field) would it help if there was something called “pain” which assigned a cost to bumping into walls, or something called “pleasure” which assigned a benefit to successfully fetching the ball?
Is the fact that animals want food and positive social attention necessary to their ability to learn efficiently about the world? We’re evolved to narrow our attention to what’s most important for survival—we notice motion more than we notice still figures, we’re better at recognizing faces than arbitrary objects. Is it possible that any process needs to have “desires” or “priorities” of this sort in order to narrow its attention enough to learn efficiently?
To some extent, most learning algorithms have cost functions associated with failure or error, even the one-line formulas. It would be a bit silly to say the Mumford–Shah functional feels pleasure and pain. So I guess there's also the issue of clarifying exactly what desires/values are.
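The fetch-the-ball question above can be made concrete with tabular Q-learning, where "pain" is a negative reward for bumping walls and "pleasure" a positive reward for fetching. Everything here (the gridworld, rewards, and hyperparameters) is invented for illustration; it is a sketch, not a claim about infants or real robots.

```python
import random

SIZE = 4
BALL = (3, 3)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def step(state, action, pain=-1.0, pleasure=10.0):
    """One move; bumping a wall hurts, reaching the ball is rewarded."""
    x, y = state
    nx, ny = x + action[0], y + action[1]
    if not (0 <= nx < SIZE and 0 <= ny < SIZE):
        return state, pain            # bumped into a wall
    if (nx, ny) == BALL:
        return (nx, ny), pleasure     # fetched the ball
    return (nx, ny), 0.0

def train(episodes=5000, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning with epsilon-greedy exploration."""
    q = {((x, y), a): 0.0 for x in range(SIZE) for y in range(SIZE) for a in range(4)}
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(50):
            if random.random() < eps:
                a = random.randrange(4)
            else:
                a = max(range(4), key=lambda i: q[(s, i)])
            s2, r = step(s, ACTIONS[a])
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, i)] for i in range(4)) - q[(s, a)])
            s = s2
            if s == BALL:
                break
    return q

random.seed(0)
q = train()
s, steps = (0, 0), 0                  # greedy rollout with the learned values
while s != BALL and steps < 20:
    a = max(range(4), key=lambda i: q[(s, i)])
    s, _ = step(s, ACTIONS[a])
    steps += 1
print(s, steps)
```

In this toy setting, zeroing out pain and pleasure leaves every Q-value flat at zero, so greedy behaviour never improves; that is one concrete sense in which this kind of learner "needs" something like goals to direct its learning.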
Practical importance for what purpose? Whatever that purpose is, adding heuristics that optimize the learning heuristics for better fulfillment of that purpose would be fruitful for that purpose.
It would be of practical importance to the extent that the original implementation of the learning heuristics is suboptimal, and to the extent that implementable learning-heuristic-improving heuristics can work on that. If you are talking about autonomous agents, self-improvement is a necessity, because you need open-ended potential for further improvement. If you are talking about non-autonomous tools people write, it's often difficult to construct useful heuristic-improvement heuristics. But of course their partially-optimized structure was already chosen using the values they're optimized for: the purpose resides in the designers.
What do you mean by a goal? Or learning?
I’m highly skeptical. I suspect that you may have failed to distinguish between sensory empiricism, which is a large standard movement, and the kind of thinking embodied in How An Algorithm Feels From the Inside which I’ve never seen anywhere else outside of Gary Drescher (and rumors that it’s in Dennett books I haven’t read).
Simple litmus test: What is the Quinean position on free will?
“It’s nonsense!” = what I think standard “naturalistic” philosophy says
“If the brain uses the following specific AI-ish algorithms without conscious awareness of it, the corresponding mental ontology would appear from the inside to generate the following intuitions and apparent impossibilities about ‘free will’...” = Less Wrong / Yudkowskian
Eliezer,
I'm not trying to say that you haven't made genuine contributions. Making genuine contributions along the Quinean path is what I mean when I say your work is part of that movement. And certainly, you speak a different language—the language of algorithms and AI rather than that of analytic philosophy. (Though there are quite a few who are doing philosophy in the language of AI, too: Judea Pearl is a shining example.)
‘How an algorithm feels from the inside’ is an important insight—an important way of seeing things. But your factual claims about free will are not radical. You agree with all naturalists that we do not have libertarian free will. We have no power to cause effects in the world without ourselves being fully caused, because we are fully part of nature. And you agree with naturalists that we are, nonetheless, able to deliberate about our actions. And that deliberation can, of course, affect the action we eventually choose. Our beliefs and desires affect our decisions, too.
Your differences with Quine look, to me at least, more like the differences that Quinean naturalists have with each other, rather than the differences that Quinean naturalists have with intuitionists and theists and postmodernists and phenomenologists, or even non-Quinean “naturalists” like Frank Jackson and David Chalmers.
Luke,
From my perspective, the idea that we do not have libertarian free will is too obvious to be interesting. If you want to claim that places me in a particular philosophical camp, fine, but that doesn’t mean they do the same sort of cognitive labor I do when I’m doing philosophy. I knew there wasn’t libertarian free will the instant I first considered the problem, at I think maybe age fourteen or thereabouts; if that made me a master philosopher, great, but to me it seems like the distance from there to being able to resolve the algorithms of the brain into their component parts was the interesting part of the journey.
(And Judea Pearl I have quite well acknowledged as an explicit shoulder to stand upon, but so far as I know he’s another case of an AI researcher coming in from outside and solving a problem where philosophers just spun their wheels because they didn’t think in algorithms.)
I did not put you in the Quinean camp merely because of your agreement about libertarian free will. I listed about a dozen close comparisons on matters that are highly controversial in mainstream philosophy. And I placed special emphasis on your eerily echo-ish defense of Quine’s naturalized epistemology, which is central to both your philosophy and his.
I agree with you about Judea Pearl coming from AI to solve problems on which philosophers had been mostly stalled for centuries. Like Dennett says, AI researchers are doing philosophy—and really good philosophy—without really knowing it. Except for Pearl, actually. He does know he’s doing philosophy, as becomes apparent in his book on causality, for example, where he is regularly citing the mainstream philosophical literature on the subject (alongside statistics and AI and so on).
Look, if someone came to me and said, “I’m great at LW-style philosophy, and the proof of this is, I can argue there’s no libertarian free will” I would reply “You have not yet done any difficult or worthwhile cognitive work.” It’s like saying you don’t believe in astrology. Well, great, and yes there’s lots of people who disagree with you about that, but there’s a difference between doing grade school arithmetic and doing calculus, and “There is no libertarian free will” is grade school arithmetic. It doesn’t interest me that this philosophical school agrees with me about that. It’s too simple and basic, and part of what I object to in philosophy is that they are still arguing about problems like this instead of moving onto real questions.
Eliezer,
I don’t get it. Your comment here doesn’t respond to anything I said in my previous comment. The first sentence of my previous comment is: “I did not put you in the Quinean camp merely because of your agreement about libertarian free will.”
I think Eliezer is suggesting that all the things you’ve mentioned that distinguish Quinean naturalists from other philosophers are similarly basic, and that “LW-style philosophy” takes (what turns out to be) Quinean naturalism as a starting point and then goes on to do things that no one working in mainstream philosophy has thought of.
In other words, that the problem with mainstream philosophy isn’t that it’s all wrong, but that much of it is wrong and that the part that isn’t wrong is mostly not doing anything interesting with its not-wrongness.
(I make no comment on whether all, or some, or none, of that is correct. I’m just hoping to reduce the amount of talking-past-one-another here.)
Eliezer is suggesting that the Quineans are “not doing anything interesting with [their] not-wrongness” after being aware of the field for all of an hour and a half?!
Makes perfect sense to me. Someone comes up to me and says “This person is a brilliant mathematician! She just showed me a proof that there’s no highest prime, and proved Pythagoras’ theorem!” my response would be that that’s still no evidence that she’s made any worthwhile contribution to mathematics. She may have, but there’s little reason to believe it from the original statement.
Seems to me less like that and more like, “this Euclid fellow was brilliant”, followed by a list of things that Euclid proved before anybody else proved. Timing matters here. It’s no coincidence that before Quine came along, the clever Eliezers were not taking Quinean naturalism for granted.
For another analogy, if someone came along and told you, “this Hugh Everett fellow was brilliant! Here, read this paper in which he argues that the wave function never collapses”, would you say, “well, Eliezer already went through that a few years ago; there’s still no evidence that Everett made any worthwhile contribution”?
“before Quine came along, the clever Eliezers were not taking Quinean naturalism for granted.”
Citation needed.
I did not come to this conclusion on the basis of having read the claim somewhere. Rather, it's what I gather from having read philosophy from both before and after Quine. If clever men were coming up with Quinean insight left and right before Quine appeared, then we should see a large number of philosophers pre-Quine who make Quine redundant. I don't recall encountering any of these philosophers, whose existence would virtually be assured if I were wrong. But suppose that I am simply ignorant. We still have Quine's reputation to contend with, the wide acknowledgment by major philosophers that he was highly influential. If I were wrong, he should have been lost in a sea of bright young men who anticipated his key insights.
“If clever men were coming up with Quinean insight left and right before Quine appeared, then we should see a large number of philosophers pre-Quine who make Quine redundant.”
Assuming also that those ‘clever men’ were going into philosophy rather than dismissing it as Eliezer has.
Eliezer may say that he dismisses philosophy, but he has nevertheless published a great deal which takes issue with some philosophy, agrees with other philosophy, and most importantly, he has provided a great deal of argumentation in favor of these conclusions which some philosophers agree with and other philosophers disagree with. Whether he believes it or not, Eliezer is doing philosophy, and a lot of it.
So, where are these clever men pre-Qune who dismissed philosophy and then proceeded, as Eliezer has done, to produce reams of it?
There are a few, e.g. E.T. Jaynes, Alfred North Whitehead (“Philosophy begins in wonder. And, at the end, when philosophic thought has done its best, the wonder remains. ”), and Richard Feynman (over and over and over again.)
More generally, though, those 'clever men' have tended to ignore philosophy and charge ahead with whatever they're doing; it's just that Eliezer's work has tended to impinge more on philosophy than, say, thermodynamics experiments or calculus proofs.
ETA: This didn’t actually address what Constant meant; my apologies.
Well, you did answer the question I asked, so it’s my fault that I didn’t word the question right. It’s practically a philosophical tradition to bury philosophy and then do philosophy on the grave of philosophy. For example the positivists sought to bury metaphysics. The king is dead, long live the king. So, sure, there are many examples of that.
The issue I was interested in was not this, but was whether it is probable that Eliezer independently reproduced Quine’s philosophy. I did not think it was likely. Certain of our ideas really do arise spontaneously among the clever generation after generation, but other ideas do not but are discovered rarely, at which point the ideas may be widely disseminated. I don’t number Quine’s ideas as among those that arise spontaneously, but among those that are rarely discovered and then may be widely disseminated. My evidence for this was Quine’s seeming originality. In response, it was argued that until Quine, the discoverers went on to do something else, which is why it wasn’t until Quine that the ideas were brought to the attention of philosophers. I argued in response that at least some fraction should, like Eliezer, have written about it, and then I asked, so where are these pre-Quine Quines who wrote about it? Only, I worded the question badly, and instead asked, where are the philosophers who dismissed philosophy. Of which there are, of course, many.
It’s hard to trace those causal lines, but here’s one data point: Dennett’s ideas have spread rather widely, and Dennett is an enthusiastic Quinean naturalist, and indeed was a student of Quine. Here’s Dennett:
Also: Stich, who might be called the ‘founder’ of experimental philosophy, was also an enthusiastic student of Quine’s. And experimental philosophy is the kind of philosophy getting all the major press in the last 10 years, it seems to me.
“still no evidence” is very much different to claims that certain properties do not exist in a given body of work. Absence of evidence (after an hour’s looking, if that) is not evidence of absence.
“Absence of evidence (after an hour’s looking, if that) is not evidence of absence.” Actually it is. Weak evidence, but evidence nonetheless.
More to the point, if someone makes a claim that a work belonging to reference class X has a property Y, and then presents no evidence that it has that property, and you’ve previously investigated many other members of class X and found them all to have the property not-Y, it’s reasonable to assume that the new work has not-Yness until given evidence otherwise.
If someone comes along and says “this unqualified person on the internet has found a proof that the Second Law of Thermodynamics is wrong! I know all other unqualified people on the internet who’ve said that have been wrong, but I’m going to claim this one is correct, without giving you any evidence for that”, you’d be absolutely reasonable just to say “they’re wrong” without bothering to check.
It appears that Eliezer has come to the conclusion, based on the academic philosophy he’s read, that the reference class “academic philosophers” and the reference class “random nut on the internet” have several properties in common. He may or may not be correct in this conclusion (I’ve read little academic philosophy and wouldn’t want to judge) but his reactions given that premise seem perfectly sensible to me.
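The earlier "weak evidence, but evidence nonetheless" point can be made quantitative with a toy Bayes computation. The numbers below are invented purely to illustrate the direction of the update.

```python
# If evidence E would probably be found when hypothesis H is true, then
# failing to find E must lower P(H). Made-up numbers for illustration.

prior_h = 0.5          # P(H): the work has the claimed property
p_find_given_h = 0.6   # P(supporting evidence found | H)
p_find_given_not_h = 0.1

p_no_find = (1 - p_find_given_h) * prior_h + (1 - p_find_given_not_h) * (1 - prior_h)
posterior_h = (1 - p_find_given_h) * prior_h / p_no_find

print(posterior_h)     # lower than the 0.5 prior
```

With a more cursory search (a lower `p_find_given_h`), the drop shrinks toward zero, which is exactly the "weak evidence" part: a ninety-minute look moves the posterior only slightly, but it does move it.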
I think it’s absurd to equate the claim “Certain philosophers can have ideas useful to LessWrong” with “this unqualified person on the internet has found a proof that the Second Law of Thermodynamics is wrong”, and the fact that you’re framing it as such indicates that you are highly motivated in your argumentation.
As for the hypothetical premise that the reference classes “academic philosophers” and “random nut on the internet” have several properties in common: I invite you to look at the top right of this website for the mutual endorsement of and by the Future of Humanity Institute, which does, you guessed it, academic philosophy. Also refer to the numerous occasions throughout this website on which top contributors cite the FHI as a valid outlet for efficient donations towards existential risk mitigation. Is Less Wrong suggesting we donate to people with the credibility of ‘random nuts on the internet’? Or is there perhaps some inconsistency, which is what people all over this thread are pointing out?
I actually have no great feelings about the argument either way. I’m using that as an example of a case where given a sufficiently strong prior you would accept Eliezer’s reasoning. I’m also suggesting that Eliezer appears to have that sufficiently strong prior.
Please note that I made no claims about my own thoughts on academic philosophy, and specifically stated that I don’t share that hypothetical premise. But from Eliezer’s own statements, it appears that he does have that pre-existing view of philosophers. And given that he has already formed that view he is being perfectly reasonable in not bothering to change that view without sufficiently strong evidence.
So what you’re actually saying is that given an arbitrary premise held arbitrarily strongly, one can rationally reject an arbitrary amount of evidence. I guess this is true, if trivially so.
What I think you’ve missed is that the premise is not shielded from discussion and can be itself judged, especially on this website which rejects theism for the exact reason of starting from an arbitrary premise.
(I haven’t downvoted you by the way)
Yes, I am saying that. However, I’m also saying that from what Eliezer has said, I don’t think his view of academic philosophy is an arbitrary one, but one formed from reading a reasonable amount of philosophy. Nor do I think the amount of evidence that’s been presented is arbitrary—it certainly doesn’t, by itself, convince me that this group of people have much to say, and I’m starting out from a neutral position, not a negative one.
I affirm this interpretation.
Eliezer’s response does not. It looks like the response of one who feels their baby, LW-style philosophy, is under attack. But it isn’t.
Methinks Eliezer needs to spend more time practicing the virtues of scholarship by actually reading much of the philosophy that he is critiquing. His assessments of “naturalistic” philosophy seem like straw men. Furthermore, from a psychological perspective, it seems like Eliezer is trying to defend his previously made commitments to “LW-Style philosophy” at all costs. This is not the mark of true rationality—true rationality admits challenges to previous assumptions.
Okay, so what have they done that I would consider cognitive philosophy? It doesn’t matter how many verbal-type non-dissolved questions we agree on apart from that. I’m taking free will as an exemplar and saying, “But it’s all like that, so far as I’ve been able to tell.”
I’m not sure what you mean by this. Are you saying that my claim that LW-style philosophy shares many central assumptions with Quinean naturalism, in contrast to most of philosophy, doesn’t hinge on whether or not I can present a long list of things on which LW-style philosophy and Quinean naturalism agree?
I suspect that’s not what you’re saying, but then… what do you think it was that I was claiming in the first place?
Or, another way to put it: Which sentence of my original article are you disagreeing with? Do you disagree with my claim that “standard Less Wrong positions on philosophical matters have been standard positions in a movement within mainstream philosophy for half a century”? Or perhaps you disagree with my claim that “Less Wrong-style philosophy is part of a movement within mainstream philosophy to massively reform philosophy in light of recent cognitive science—a movement that has been active for at least two decades”? Or perhaps you disagree with my claim that “Rationalists need not dismiss or avoid philosophy”?
I wonder if you agree with gjm’s suggestion that “LW-style philosophy takes (what turns out to be) Quinean naturalism as a starting point and then goes on to do things that no one working in mainstream philosophy has thought of.” That’s roughly what I said above, though of course I’ll point out that lots of Quinean naturalists have taken Quinean naturalism as a starting point and done things that nobody else thought of. That’s just what it means to make original contributions in the movement.
I’ll be happy to provide examples of “cognitive philosophy” once I’ve got this above bit cleared up. I’ve given examples before (Schroeder 2004; Bishop & Trout 2004; Bickle 2003), but of course I could give more detail.
I’m saying that the claim that LW-style philosophy shares many assumptions with Quinean naturalism in contrast to most of philosophy is unimportant, thus, presenting the long list of basic assumptions on which LW-style and Quinean naturalism agree is from my perspective irrelevant.
Yes. What I would consider “standard LW positions” is not “there is no libertarian free will” but rather “the philosophical debate on free will arises from the execution of the following cognitive algorithms X, Y, and Z”. If the latter has been a standard position then I would be quite interested.
The kinds of reforms you quote are extremely basic, along the lines of “OMG there are cognitive biases and they affect philosophers!” rather than “This is how this specific algorithm generates the following philosophical debate...” If the movement hasn’t progressed to the second stage, then there seems little point in aspiring LW rationalists reading about it.
gjm’s suggestion is correct, but the thing which you seem to deny, and which I think is true, is that LW is at a different stage of doing this sort of philosophy than any Quinean naturalism I have heard of, so the other Quineans “doing things that nobody else has thought of” don’t seem to be doing commensurate work.
I am not asking for an example of someone who agrees with me that, sure, cognitive philosophy sounds like a great idea, by golly. There’s a difference between saying “Sure, evolution is true!” and doing evolutionary biology.
I’m asking for someone who’s dissolved a philosophical question into a cognitive algorithm, preferably in a way not previously seen before on LW.
Did you read the LW sequence on free will, both the setup and the solution? Apologies if you’ve already previously answered this question, I have a vague feeling that I asked you before and you said yes, but still, just checking.
On the whole, you seem to think that I should be really enthusiastic about finding philosophers who agree with my basic assumptions, because here are these possible valuable allies in academia—why, if we could reframe LW as Quineanism, we’d have a whole support base ready-made!
Whereas I’m thinking, “If you ask what sort of activity these people perform in their daily work, their skills are similar to those of other philosophers and unlike those of people trying to figure out what algorithm a brain is running” and so they can’t be hired to do the sort of work we need without extensive retraining; and since we’re not out to reform academic philosophy, per se, it’s not clear that we need allies in a fight we could just bypass.
Well, it’s important to my claim that LW-style philosophy fits into the category of Quinean naturalism, which I think is undeniable. You may think Quinean naturalism is obvious, but well… that’s what makes you a Quinean naturalist. Part of the purpose of my post is to place LW-style philosophy in the context of mainstream philosophy, and my list of shared assumptions between LW-style philosophy and Quinean philosophy does just that. That goal by itself wasn’t meant to be very important. But I think it’s a categorization that cuts reality near enough the joints to be useful.
Then we are using the word “standard” in different ways. If I were to ask most people to list some “standard LW positions”, I’m pretty sure they would list things like reductionism, empiricism, the rejection of libertarian free will, atheism, the centrality of cognitive science to epistemology, and so on—long before they list anything like “the philosophical debate on free will arises from the execution of the following cognitive algorithms X, Y, and Z”. I’m not even sure how much consensus the latter enjoys on Less Wrong; I doubt it is as much a ‘standard’ position there as the other things I mentioned.
But I’m not here to argue about the meaning of the word standard.
Disagreement: dissolved.
Moving on: Yes, I read the free will stuff. ‘How an Algorithm Feels from the Inside’ is one of my all-time favorite Yudkowsky posts.
I’ll need to hear more about what you think LW is doing that Quinean naturalists are not doing. But really, I don’t even need to wait for that to respond. Even work by philosophers who are not Quinean naturalists can be useful in your very particular line of work—for example in clearing up your CEV article’s conflation of “extrapolating” from means to ends and “extrapolating” from current ends to new ends after reflective equilibrium and other processes have taken place.
Finally, you say that if Quinean naturalism hasn’t progressed from recognizing that biases affect philosophers to showing how a specific algorithm generates a philosophical debate then “there seems little point in aspiring LW rationalists reading about it.”
This claim is, I think, both clearly false as stated and misrepresents the state of Quinean naturalism.
First, on falsity: There are many other useful things for philosophers (including Quinean naturalists) to be doing besides just working with scientists to figure out why our brains produce confused philosophical debates. Since your own philosophical work on Less Wrong has considered far more than just this, I assume you agree. Thus, it is not the case that Quinean naturalists aren’t doing useful work unless they are discovering the cognitive algorithms that generate philosophical debates.
Second, on misrepresentation: Quinean naturalists don’t just discuss the fact that cognitive biases affect philosophers. Quinean naturalists also discuss how to do philosophy amidst the influence of cognitive biases. That very question is a major subject of your writing on Less Wrong, so I doubt you see no value in it. Moreover, Quinean naturalists do sometimes discuss how cognitive algorithms generate philosophical debates. See, for example, Eric Schwitzgebel’s recent work on how introspection works and why it generates philosophical confusions.
It seems you’re not just resisting the classification of LW-style philosophy within the broader category of Quinean naturalism. You’re also resisting the whole idea of seeing value in what mainstream naturalistic philosophers are doing, which I don’t get. How do you think that thought got generated? Reading too much modal logic and not enough Dennett / Bickle / Bishop / Metzinger / Lokhorst / Thagard?
I’m not even trying to say that you, Eliezer, should read more naturalistic philosophy. I suspect that’s not the best use of your time, especially given your strong aversion to it. But I am saying that the mainstream community has useful insights and clarifications and progress to contribute. You’ve already drawn heavily from the basic insights of Quinean naturalism, whether or not you got them from Quine himself. And you’ve drawn from some of the more advanced insights of people like Judea Pearl and Nick Bostrom.
So I guess I just don’t get what looks to me like a strong aversion in you to rationalists looking through Quinean naturalistic philosophy for useful insights. I don’t understand where that aversion is coming from. If you’re not that familiar with Quinean naturalistic philosophy, why do you assume in advance that it’s a bad idea to read through it for insights?
I’m reminded of the “subsequence” of The Level Above Mine, Competent Elites, Above Average AI Scientists, and That Magical Click.
Maybe mainstream philosophers just lack the aura of thousand-year-old rationalist vampires?
I’m quite sure they do. Right now I can’t think of a philosopher who is as imposing to me as (the late) E.T. Jaynes is. Unless you count people like Judea Pearl who also do AI research, that is. :)
But that doesn’t mean that mainstream philosophers never make useful and original contributions on all kinds of subjects relevant to Less Wrong and even to friendly AI.
That (Jaynes) is a pretty high standard. But not impossibly high. As candidates, I would mention Jaakko Hintikka, Per Martin-Löf, and the late David Lewis. If you are allowed to count economists, then I would also mention game theorists like Aumann, Binmore, and the late John Harsanyi. And if you allow philosophically inclined physicists like Jaynes, there are quite a few folks worth mentioning.
I’d never heard of Per Martin-Löf; thanks.
I of course am not definitive here, but I strongly suspect that from EY’s perspective it means precisely that.
If so, I don’t think he can maintain that position consistently, since he has already benefited from the work of many mainstream philosophers, and continues to do so—for example Bostrom on anthropic reasoning.
Maybe. But they have a self-deprecating sense of humor. Doesn’t that count for something?
Actually, it’s an expectation that studying this philosophy stuff would be of no use (or could even harm you), which is a more reflectively reliable judgment than a mere emotional aversion. It might be incorrect, but it can’t be influenced by arguing that the aversion is irrelevant (not that you do argue this way, but summarizing the position with that word suggests doing so).
Thanks for the link to Eric Schwitzgebel; very interesting reading!
Because I expect it to teach very bad habits of thought that will lead people to be unable to do real work. Assume naturalism! Move on! NEXT!
Yes, that’s what most Quinean naturalists are doing...
Can I expect a reply to my claim that a central statement of your above comment was both clearly false and misrepresented Quinean naturalism? I hope so. I’m also still curious to hear your response to the specific example I’ve now given several times of how even non-naturalistic philosophy can provide useful insights that bear directly on your work on Friendly AI (the “extrapolation” bit).
As for expecting naturalistic philosophy to teach very bad habits of thought: That has some plausibility. But it is hard to argue about with any precision. What’s the cost/benefit analysis on reading naturalistic philosophy after having undergone significant LW-rationality training? I don’t know.
But I will point out that reading naturalistic philosophy (1) deconverted me from fundamentalist Christianity, (2) led me to reject most of standard analytic philosophy, (3) led me to almost all of the “standard” (in the sense I intended above) LW positions, and (4) got me reading and loving Epistemology and the Psychology of Human Judgment and Good and Real (two philosophy books that could just as well be a series of Less Wrong blog posts), all before I started regularly reading Less Wrong.
So… it’s not always bad. :)
Also, your recommendation not to read naturalistic, reductionistic philosophy outside of Less Wrong feels very paternalistic and cultish to me, and I have a negative emotional (and perhaps rational) reaction to the suggestion that people should get their philosophy only from a single community.
Reply to charge that it is clearly false: Sorry, it doesn’t look clearly false to me. It seems to me that people can get along just fine knowing only what philosophy they pick up from reading AI books.
Reply to charge that it misrepresented Quinean naturalism: Give me an example of one philosophical question they dissolved into a cognitive algorithm. Please don’t link to a book on Amazon where I click “Surprise me” ten times looking for a dissolution and then give up. Just tell me the question and sketch the algorithm.
The CEV article’s “conflation” is not a convincing example. I was talking about the distinction between terminal and instrumental value way back in 2001, though I made the then-usual error of using nonstandard terminology. I left that distinction out of CEV specifically because (a) I’d seen it generate cognitive errors in people who immediately went funny in the head as soon as they were introduced to the concept of top-level values, and (b) because the original CEV paper wasn’t supposed to go down to the level of detail of ordering expected-consequence updates versus moral-argument-processing updates.
Thanks for your reply.
On whether people can benefit from reading philosophy outside of Less Wrong and AI books, we simply disagree.
Your response on misrepresenting Quinean naturalism did not reply to this part: “Quinean naturalists don’t just discuss the fact that cognitive biases affect philosophers. Quinean naturalists also discuss how to do philosophy amidst the influence of cognitive biases. That very question is a major subject of your writing on Less Wrong, so I doubt you see no value in it.”
As for an example of dissolving certain questions into cognitive algorithms, I’m drafting up a post on that right now. (Actually, the current post was written as a dependency for the other post I’m writing.)
On CEV and extrapolation: You seem to agree that the distinction is useful, because you’ve used it yourself elsewhere (you just weren’t going into so much detail in the CEV paper). But that seems to undermine your point that valuable insights are not to be found in mainstream philosophy. Or, maybe that’s not your claim. Maybe your claim is that all the valuable insights of mainstream philosophy happen to have already shown up on Less Wrong and in AI textbooks. Either way, I once again simply disagree.
I doubt that you picked up all the useful philosophy you have put on Less Wrong exclusively from AI books.
I agree about philosophy, and actually I feel similarly about LW-style rationality, for my value of real work (engineering mostly, with some art and science). Your tricks burden the tree search, and easily lead to processing branches in the wrong order, since the ‘biases’ that make branch processing effective are disabled or, worst of all, negated before a substitute is devised.
If you want to form a belief about, for example, FAI, it’s all very well that you don’t feel morality can result from some simple principles. But if you want to build FAI, this branch (a generated morality that we agree with) is much, much lower in the tree, while its probability of success really isn’t that much worse, since the long, hand-wavy argument has many points of possible failure and low reliability. And there’s still no immunity against fallacies. The worst form of the sunk cost fallacy is disregarding the possibility of a better solution after the cost has been sunk. That’s what destroys corporations after they sink costs: they won’t even pursue a cost-recovery option when it doesn’t coincide with their prior effort and only utilizes part of it.
Perhaps. But it is difficult to imagine any less complete problem dissolution being successful at actually shutting down that confused philosophical debate, and thus freeing those first-class minds to actually do those hypothetical useful things.
BTW, by “more” I meant “additional”: I meant that there “are many other useful things for philosophers… to be doing...” I’ve now clarified the wording in the original comment.
It might be useful, if only for gaining status and attention and funding, to connect your work directly to one or several academic fields. To present it as a synthesis of philosophy, computer science, and cognitive science (or some other combination of your choice.) When people ask me what LessWrong is, I generally say something like “It’s philosophy from a computer scientist’s perspective.” Most people can only put a mental label on something when they have a rough idea of what it’s like, and it’s not practical to say, “Well, our work isn’t like anything.”
That doesn’t mean you have to hire philosophers or join a philosophy department; it might not mean that you, personally, have to do anything. But I do think that more people would be interested, and have a smaller inferential distance, if LW ideas were generally presented as related to other disciplines.
Expanding on this: which section of my local Barnes & Noble will your (Eliezer’s) book be in? Philosophy seems like the best fit (aside from best-selling non-fiction) for attracting new interested readership.
Amazon’s “Books > Nonfiction > Social Sciences” contains things like Malcolm Gladwell and Predictably Irrational, which I think is the audience that Eliezer is targeting.
Just taking the example I happen to know about: Sarah-Jane Leslie works on the meaning of generics. (What do we mean when we say “Tigers have stripes”? All tigers? Most tigers? Normal tigers? But then how do we account for true statements like “Tigers eat people” when most tigers don’t eat people, or “Peacocks have colorful tails” when female peacocks don’t have colorful tails?) She answers this question directly using evidence from cognitive science. I think it counts as question-dissolving.
When I read your first post here, my mind immediately went to You’re Entitled to Arguments, But Not (That Particular) Proof. I gave you the benefit of the doubt since you called it a ‘litmus test’ (however arbitrary), but you seem to have anchored on it. If your work is in substantial agreement with an established field of philosophy, that means there are more intelligent people who could become allies, and a store of knowledge from which valuable insights could come. I don’t know why you are looking this particular gift horse in the mouth.
There are lots of people who I think have valuable insights—cognitive scientists, AI researchers, statistical learning experts, mathematicians...
The question is whether high-grade academic philosophy belongs on the scholarship list, not whether scholarship is a virtue. The fact that they have managed to produce a minority school that agrees with Gary Drescher on the extremely basic questions of whether there’s libertarian free will (no) and whether people are made of atoms (yes) does not entitle them to a position next to Artificial Intelligence: A Modern Approach.
Physicalism and the rejection of free will are both majority positions in Anglophone philosophy, actually, but I agree that agreement on those points doesn’t put someone on the shelf next to AIMA.
Regarding physicalism, I don’t entirely trust that survey.
Firstly, most of those who call themselves physicalists nevertheless think that qualia exist and are Deeply Mysterious, such that one cannot deduce a priori, from objective physical facts, that Alfred isn’t a zombie or that Alfred and Bob aren’t qualia-inverted with respect to each other.
Secondly, in very recent years (the 90s into the new century) I think there’s been a rising tide of antimaterialism. Erstwhile physicalists such as Jaegwon Kim have defected. Anthologies are published with names like “The Waning of Materialism”.
As the survey itself tells us, only 16% accept or lean towards “zombies are inconceivable”.
This is all consistent with my experience in internet debates, where it seems that most upcoming or wannabe philosophers who have any confident opinions on the matter are antimaterialists.
All good points. I take back the claim that physicalism is a majority position; that is under serious doubt.
How sad! :(
[...]
Strictly speaking, I don’t think either of these requires abandoning physicalism by even a small degree. To say that one can or cannot conceive something is not to say anything directly about reality itself (except in the trivial sense that it says something about what one, i.e. a real person, can or cannot conceive). To say that one can or cannot deduce something is, again, not to say anything directly about reality itself. Even if you want to argue that it says something about reality, however indirectly, it’s not at all obvious that it says this particular thing (i.e. non-physicalism).
In particular, I am well aware of the severe limitations of deduction as a path to knowledge. Being so aware, I am not in the slightest surprised by, or troubled by, the inability to deduce that Alfred isn’t a zombie. I don’t see why I should be troubled. As for what I can conceive—well, I can conceive all sorts of things which have no obvious connection to reality. Why should examination of the limits of my imagination give me any sort of information about whether physicalism is true?
The key question for me is: is the hypothesis of physicalism tenable? I’m not asking for proof, deductive or otherwise. I am asking whether the hypothesis is consistent with the evidence and internally coherent. The fact that someone can conceive of zombies, and therefore conceive that the hypothesis is false, is no disproof of the hypothesis. And similarly, the fact that the hypothesis of physicalism cannot be deduced is no disproof either.
Possibly you should state your hypothesis ahead of time and define what would count (or have counted in the past) as a worthwhile contribution to LW-style rationalism from within the analytic philosophy community.
Then we would have a concrete way to decide the question of whether analytic philosophy has contributed anything in the past, or contributes anything in the future.
It also might turn out in the process of formalising your definition of what counts as a worthwhile contribution that nothing outside of your specific field of AI research counts for you, which would in itself be a worthwhile realisation.
Acknowledging my own biases here, I’m an analytic philosopher who mostly teaches scientific methodology and ethics (with a minor side interest in statistics) and my reaction to perusing the LW content was that there were some very interesting and valuable nuggets here for me to fossick out but that the bulk of the content wasn’t new or non-obvious to me.
Possibly there is so little for you in philosophy that has real novelty or value because there is already enormous overlap between what you do and what is done in the relevant subset of philosophy.
Being a philosopher makes you acutely aware of how deep most modern people’s intellectual debts to philosophy are, and how little awareness of this they have. It’s all too easy to believe that one came to one’s moral viewpoint entirely unassisted and entirely naturally, for example, without being aware that one is actually articulating a mixture of Kant’s and Bentham’s ideas that one never would have come up with had one lived before Kant and Bentham. Many people who have never heard of Peter Singer take the animal liberation movement for granted, unaware that the term “animal liberation” was coined by a philosopher in 1975, drawing on previous work by philosophers in the 1970s.
Hello? Robert Kane?
Dennett is one of the leaders of mainstream philosophy. If it’s in Dennett, Luke wins.
How did you acquire your beliefs about what standard “naturalistic” philosophy says? I have this impression that it was from outside caricatures rather than philosophers themselves.
Remember Scott Aaronson’s critique of Stephen Wolfram? You seem at risk of being in a similar position with respect to mainstream analytic philosophy as Wolfram was with respect to mainstream science.
A partial answer here:
I have always been too shy to ask, but would anyone be willing to tell me how wrong I am in my musings on free will here? I haven’t read the LW sequence on free will yet, since it states that “aspiring reductionists should try to solve it on their own.” I tried; any feedback?
I don’t think it’s very good. (On the other hand, I have seen a great deal worse on free will.) There seem to be some outright errors, or at least imprecisions, e.g.:
To keep on topic, are you familiar with quining and all the ways of self-referencing?
I am vaguely aware of it. As far as I know, a quine can be seen as an artifact of a given language rather than a complete and consistent self-reference. Every quine is missing some of its own definition; e.g., “when preceded by” or “print” need external interpreters to work as intended. No closed system can contain a perfect model of itself, and is consequently unable to predict its own actions; therefore no libertarian free will can exist.
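For readers who haven't seen one, here is a minimal Python quine in the program sense: code whose output is its own source. As the comment above notes, it leans on external machinery (`%r` repr-formatting and the `print` builtin) rather than containing a complete model of itself:

```python
import contextlib
import io

# A classic two-line quine: `s` holds a template of the program,
# and `s % s` substitutes the template into itself.
s = 's = %r\nprint(s %% s)'
program = s % s

# Verify that executing the program prints exactly its own source.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(program)
assert buf.getvalue() == program + '\n'
```

The trick is that `%r` re-inserts the template with its quotes and escapes intact, so the substituted text parses back to the very same string it was built from.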
What is outright wrong or imprecise about it?
The main point I tried to make is that a definition of free will that does satisfy our understanding of being free agents is possible if you disregard “free from” and concentrate on “free to”.
That’s good for standard philosophy, but it doesn’t rise to the level of LW-style cognitive philosophy.
The “How an Algorithm Feels” dissolution of free will seemed old hat to me.
Discussions of priority are boring. If Quinean naturalism has insights relevant to LW, let’s hear them!
What I’m saying is that Less Wrong shouldn’t ignore mainstream philosophy.
What I demonstrated above is that, directly or indirectly, Less Wrong has already drawn heavily from mainstream philosophy. It would be odd to suggest that the progress in mainstream philosophy that Less Wrong has already made use of would suddenly stop, justifying a choice to ignore mainstream philosophy in the future.
As for naturalistic philosophy’s insights relevant to LW, they are forthcoming. I’ll be writing some more philosophical posts in the future.
And actually, my statistical prediction rules post came mostly from me reading a philosophy book (Epistemology and the Psychology of Human Judgment), not from reading psychology books.
I’ll await your next post, but in retrospect you should have started with the big concrete example of mainstream philosophy doing an LW-style dissolution-to-algorithm not already covered on LW, and then told us that the moral was that we shouldn’t ignore mainstream philosophy.
I did the whole sequence on QM to make the final point that people shouldn’t trust physicists to get elementary Bayesian problems right. I didn’t just walk in and tell them that physicists were untrustworthy.
If you want to make a point about medicine, you start by showing people a Bayesian problem that doctors get wrong; you don’t start by telling them that doctors are untrustworthy.
If you want me to believe that philosophy isn’t a terribly sick field, devoted to arguing instead of facing real-world tests and admiring problems instead of solving them and moving on, whose poison a novice should avoid in favor of eating healthy fields like settled physics (not string theory) or mainstream AI (not AGI), you’re probably better off starting with the specific example first. “I disagree with your decision not to cover terminal vs. instrumental in CEV” doesn’t cover it, and neither does “Quineans agree the world is made of atoms”. Show me this field’s power!
Eliezer,
When I wrote the post I didn’t know that what you meant by “reductionist-grade naturalistic cognitive philosophy” was only the very narrow thing of dissolving philosophical problems to cognitive algorithms. After all, most of the useful philosophy you’ve done on Less Wrong is not specifically related to that very particular thing… which again supports my point that mainstream philosophy has more to offer than dissolution-to-algorithm. (Unless you think most of your philosophical writing on Less Wrong is useless.)
Also, I don’t disagree with your decision not to cover means and ends in CEV.
Anyway. Here are some useful contributions of mainstream philosophy:
Quine’s naturalized epistemology. Epistemology is a branch of cognitive science: that’s where recursive justification hits bottom, in the lens that sees its flaws.
Tarski on language and truth. One of Tarski’s papers on truth was recently ranked the 4th most important philosophy paper of the 20th century in a survey of philosophers. Philosophers have further developed Tarski’s account since then, of course.
Chalmers’ formalization of Good’s intelligence explosion argument. Good’s 1965 paper was important, but it presented no systematic argument; only hand-waving. Chalmers breaks down Good’s argument into parts and examines the plausibility of each part in turn, considers the plausibility of various defeaters and possible paths, and makes a more organized and compelling case for Good’s intelligence explosion than anybody at SIAI has.
Dennett on belief in belief. Used regularly on Less Wrong.
Bratman on intention. Bratman’s 1987 book on intention has been a major inspiration to AI researchers working on belief-desire-intention models of intelligent behavior. See, for example, pages 60-61 and 1041 of AIMA (3rd ed.).
Functionalism and multiple realizability. The philosophy of mind most natural to AI was introduced and developed by Putnam and Lewis in the 1960s, and more recently by Dennett.
Explaining the cognitive processes that generate our intuitions. Both Shafir (1998) and Talbot (2009) summarize and discuss as much as cognitive scientists know about the cognitive mechanisms that produce our intuitions, and use that data to explore which few intuitions might be trusted and which ones cannot—a conclusion that of course dissolves many philosophical problems generated from conflicts between intuitions. (This is the post I’m drafting, BTW.) Talbot describes the project of his philosophy dissertation for USC this way: “...where psychological research indicates that certain intuitions are likely to be inaccurate, or that whole categories of intuitions are not good evidence, this will overall benefit philosophy. This has the potential to resolve some problems due to conflicting intuitions, since some of the conflicting intuitions may be shown to be unreliable and not to be taken seriously; it also has the potential to free some domains of philosophy from the burden of having to conform to our intuitions, a burden that has been too heavy to bear in many cases...” Sound familiar?
Pearl on causality. You acknowledge the breakthrough. While you’re right that this is mostly a case of an AI researcher coming in from the outside to solve philosophical problems, Pearl did indeed make use of the existing research in mainstream philosophy (and AI, and statistics) in his book on causality.
Drescher’s Good and Real. You’ve praised this book as well, which is the result of Drescher’s studies under Dan Dennett at Tufts. And the final chapter is a formal defense of something like Kant’s categorical imperative.
Dennett’s “intentional stance.” A useful concept in many contexts, for example here.
Bostrom on anthropic reasoning. And global catastrophic risks. And Pascal’s mugging. And the doomsday argument. And the simulation argument.
Ord on risks with low probabilities and high stakes. Here.
Deontic logic. The logic of actions that are permissible, forbidden, obligatory, etc. Not your approach to FAI, but will be useful in constraining the behavior of partially autonomous machines prior to superintelligence, for example in the world’s first battlefield robots.
Reflective equilibrium. Reflective equilibrium is used in CEV. It was first articulated by Goodman (1965), then by Rawls (1971), and in more detail by Daniels (1996). See also the more computational discussion in Thagard (1988), ch. 7.
Experimental philosophy on the biases that infect our moral judgments. Experimental philosophers are now doing Kahneman & Tversky -ish work specific to biases that infect our moral judgments. Knobe, Nichols, Haidt, etc. See an overview in Experiments in Ethics.
Greene’s work on moral judgment. Joshua Greene is a philosopher and neuroscientist at Harvard whose work using brain scanners and trolley problems (since 2001) is quite literally decoding the algorithms we use to arrive at moral judgments, and helping to dissolve the debate between deontologists and utilitarians (in his view, in favor of utilitarianism).
Dennett’s Freedom Evolves. The entire book is devoted to explaining the evolutionary processes that produced the cognitive algorithms that produce the experience of free will and the actual kind of free will we do have.
Quinean naturalists showing intuitionist philosophers that they are full of shit. See for example, Schwitzgebel and Cushman demonstrating experimentally that moral philosophers have no special expertise in avoiding known biases. This is the kind of thing that brings people around to accepting those very basic starting points of Quinean naturalism as a first step toward doing useful work in philosophy.
Bishop & Trout on ameliorative psychology. Much of Less Wrong’s writing is about how to use our awareness of cognitive biases to make better decisions and have a higher proportion of beliefs that are true. That is the exact subject of Bishop & Trout (2004), which they call “ameliorative psychology.” The book reads like a long sequence of Less Wrong posts, and was the main source of my post on statistical prediction rules, which many people found valuable. And it came about two years before the first Eliezer post on Overcoming Bias. If you think that isn’t useful stuff coming from mainstream philosophy, then you’re saying a huge chunk of Less Wrong isn’t useful.
Talbot on intuitionism about consciousness. Talbot (here) argues that intuitionist arguments about consciousness are illegitimate because of the cognitive process that produces them: “Recently, a number of philosophers have turned to folk intuitions about mental states for data about whether or not humans have qualia or phenomenal consciousness. [But] this is inappropriate. Folk judgments studied by these researchers are most likely generated by a certain cognitive system—System One—that will ignore qualia when making these judgments, even if qualia exist.”
“The mechanism behind Gettier intuitions.” This upcoming project of the Boulder philosophy department aims to unravel a central (misguided) topic of 20th century epistemology by examining the cognitive mechanisms that produce the debate. Dissolution to algorithm yet again. They have other similar projects ongoing, too.
Computational meta-ethics. I don’t know if Lokhorst’s paper in particular is useful to you, but I suspect that kind of thing will be, and Lokhorst’s paper is only the beginning. Lokhorst is trying to implement a meta-ethical system computationally, and then actually testing what the results are.
Of course that’s far from all there is, but it’s a start.
...also, you occasionally stumble across some neato quotes, like Dennett saying “AI makes philosophy honest.” :)
Note that useful insights come from unexpected places. Rawls was not a Quinean naturalist, but his concept of reflective equilibrium plays a central role in your plan for Friendly AI to save the world.
P.S. Predicate logic was removed from the original list for these reasons.
Saying this may count as staking an exciting position in philosophy, already right there; but merely saying this doesn’t shape my expectations about how people think, or tell me how to build an AI, or how to expect or do anything concrete that I couldn’t do before, so from an LW perspective this isn’t yet a move on the gameboard. At best it introduces a move on the gameboard.
I know Tarski as a mathematician and have acknowledged my debt to him as a mathematician. Perhaps you can learn about him in philosophy, but that doesn’t imply people should study philosophy if they will also run into Tarski by doing mathematics.
...was great for introducing mainstream academia to Good, but if you compare it to http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate then you’ll see that most of the issues raised didn’t fit into Chalmers’s decomposition at all. Not suggesting that he should’ve done it differently in a first paper, but still, Chalmers’s formalization doesn’t yet represent most of the debates that have been done in this community. It’s more an illustration of how far you have to simplify things down for the sake of getting published in the mainstream, than an argument that you ought to be learning this sort of thing from the mainstream.
Acknowledged and credited. Like Drescher, Dennett is one of the known exceptions.
Appears as a citation only in AIMA 2nd edition, described as a philosopher who approves of GOFAI. “Not all philosophers are critical of GOFAI, however; some are, in fact, ardent advocates and even practitioners… Michael Bratman has applied his “belief-desire-intention” model of human psychology (Bratman, 1987) to AI research on planning (Bratman, 1992).” This is the only mention in the 2nd edition. Perhaps by the time they wrote the third edition they read more Bratman and figured that he could be used to describe work they had already done? Not exactly a “major inspiration”, if so...
This comes under the heading of “things that rather a lot of computer programmers, though not all of them, can see as immediately obvious even if philosophers argue it afterward”. I really don’t think that computer programmers would be at a loss to understand that different systems can implement the same algorithm if not for Putnam and Lewis.
Same comment as for Quine: This might introduce interesting work, but while saying just this may count as an exciting philosophical position, it’s not a move on the LW gameboard until you get to specifics. Then it’s not a very impressive move unless it involves doing nonobvious reductionism, not just “Bias X might make philosophers want to believe in position Y”. You are not being held to a special standard as Luke here; a friend named Kip Werking once did some work arguing that we have lots of cognitive biases pushing us to believe in libertarian free will that I thought made a nice illustration of the difference between LW-style decomposition of a cognitive algorithm and treating biases as an argument in the war of surface intuitions.
Mathematician and AI researcher. He may have mentioned the philosophical literature in his book. It’s what academics do. He may even have read the philosophers before he worked out the answer for himself. He may even have found that reading philosophers getting it wrong helped spur him to think about the problem and deduce the right answer by contrast—I’ve done some of that over the course of my career, though more in the early phases than the later phases. Can you really describe Pearl’s work as “building” on philosophy, when IIRC, most of the philosophers were claiming at this point that causality was a mere illusion of correlation? Has Pearl named a previous philosopher, who was not a mathematician, who Pearl thought was getting it right?
Previously named by me as good philosophy, as done by an AI researcher coming in from outside for some odd reason. Not exactly a good sign for philosophy when you think about it.
For a change I actually did read about this before forming my own AI theories. I can’t recall ever actually using it, though. It’s for helping people who are confused in a way that I wasn’t confused to begin with. Dennett is in any case a widely known and named exception.
A friend and colleague who was part of the transhumanist community and a founder of the World Transhumanist Association long before he was the Director of the Oxford Future of Humanity Institute, and who’s done a great deal to precisionize transhumanist ideas about global catastrophic risks and inform academia about them, as well as excellent original work on anthropic reasoning and the simulation argument. Bostrom is familiar with Less Wrong and has even tried to bring some of the work done here into mainstream academia, such as Pascal’s Mugging, which was invented right here on Less Wrong by none other than yours truly—although of course, owing to the constraints of academia and their prior unfamiliarity with elementary probability theory and decision theory, Bostrom was unable to convey the most exciting part of Pascal’s Mugging in his academic writeup, namely the idea that Solomonoff-induction-style reasoning will explode the size of remote possibilities much faster than their Kolmogorov complexity diminishes their probability.
Reading Bostrom is a triumph of the rule “Read the most famous transhumanists” not “Read the most famous philosophers”.
The doomsday argument, which was not invented by Bostrom, is a rare case of genuinely interesting work done in mainstream philosophy—anthropic issues are genuinely not obvious, genuinely worth arguing about and philosophers have done genuinely interesting work on it. Similarly, although LW has gotten further, there has been genuinely interesting work in philosophy on the genuinely interesting problems of Newcomblike dilemmas. There are people in the field who can do good work on the rather rare occasions when there is something worth arguing about that is still classed as “philosophy” rather than as a separate science, although they cannot actually solve those problems (as very clearly illustrated by the Newcomblike case) and the field as a whole is not capable of distinguishing good work from bad work on even the genuinely interesting subjects.
Argued it on Less Wrong before he wrote the mainstream paper. The LW discussion got further, IMO. (And AFAIK, since I don’t know if there was any academic debate or if the paper just dropped into the void.)
Is not useful for anything in real life / AI. This is instantly obvious to any sufficiently competent AI researcher. See e.g. http://norvig.com/design-patterns/img070.htm, a mention that turned up in passing back when I was doing my own search for prior work on Friendly AI.
...I’ll stop there, but do want to note, even if it’s out-of-order, that the work you glowingly cite on statistical prediction rules is familiar to me from having read the famous edited volume “Judgment Under Uncertainty: Heuristics and Biases” where it appears as a lovely chapter by Robyn Dawes on “The robust beauty of improper linear models”, which quite stuck in my mind (citation from memory). You may have learned about this from philosophy, and I can see how you would credit that as a use of reading philosophy, but it’s not work done in philosophy and, well, I didn’t learn about it there so this particular citation feels a bit odd to me.
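Dawes’ result can be illustrated with a quick sketch on synthetic data (not his original clinical datasets; everything below is a made-up toy): an “improper” model that simply sums standardized predictors with unit weights, with no fitted coefficients at all, can still correlate strongly with the criterion.

```python
import random

random.seed(0)

def standardize(xs):
    """Rescale a list to mean 0, standard deviation 1."""
    m = sum(xs) / len(xs)
    s = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - m) / s for x in xs]

def corr(a, b):
    """Pearson correlation between two equal-length lists."""
    za, zb = standardize(a), standardize(b)
    return sum(x * y for x, y in zip(za, zb)) / len(a)

n = 200
# Three noisy measurements of a latent criterion, plus the criterion itself
latent = [random.gauss(0, 1) for _ in range(n)]
preds = [[l + random.gauss(0, 1) for l in latent] for _ in range(3)]
criterion = [l + random.gauss(0, 0.5) for l in latent]

# "Improper" linear model: unit weights on standardized predictors
z = [standardize(p) for p in preds]
improper = [sum(col) for col in zip(*z)]

print(f"unit-weight model vs criterion: r = {corr(improper, criterion):.2f}")
```

No training step, no fitted weights, yet the correlation is substantial; that robustness is the point of Dawes’ “improper linear models” chapter.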
That this isn’t at all the case should be obvious even if the only thing you’ve read on the subject is Pearl’s book. The entire counterfactual approach is due to Lewis and Stalnaker. Salmon’s theory isn’t about correlation either. Also, see James Woodward who has done very similar work to Pearl but from a philosophy department. Pearl cites all of them if I recall.
Stalnaker’s name sounds familiar from Pearl, so I’ll take your word for this and concede the point.
Cool. Let me know when you’ve finished your comment here and I’ll respond.
Done.
Quine’s naturalized epistemology: agreed.
Tarski: But I thought you said you were not only influenced by Tarski’s mathematics but also his philosophical work on truth?
Chalmers’ paper: Yeah, it’s mostly useful as an overview. I should have clarified that I meant that Chalmers’ paper makes a more organized and compelling case for Good’s intelligence explosion than anybody at SIAI has in one place. Obviously, your work (and your debate with Robin) goes far beyond Chalmers’ introductory paper, but it’s scattered all over the place and takes a lot of reading to track down and understand.
And this would be the main reason to learn something from the mainstream: If it takes way less time than tracking down the same arguments and answers through hundreds of Less Wrong posts and other articles, and does a better job of pointing you to other discussions of the relevant ideas.
But we could have the best of both worlds if SIAI spent some time writing well-referenced survey articles on their work, in the professional style instead of telling people to read hundreds of pages of blog posts (that mostly lack references) in order to figure out what you’re talking about.
Bratman: I don’t know his influence first hand, either—it’s just that I’ve seen his 1987 book mentioned in several books on AI and cognitive science.
Pearl: Jack beat me to the punch on this.
Talbot: I guess I’ll have to read more about what you mean by dissolution to cognitive algorithm. I thought the point was that even if you can solve the problem, there’s that lingering wonder about why people believe in free will, and once you explain why it is that humans believe in free will, not even a hint of the problem remains. The difference being that your dissolution of free will to cognitive algorithm didn’t (as I recall) cite any of the relevant science, whereas Talbot’s (and others’) dissolutions to cognitive algorithms do cite the relevant science.
Is there somewhere where you explain the difference between what Talbot, and also Kip Werking, have done versus what you think is so special and important about LW-style philosophy?
As for the others: Yeah, we seem to agree that useful work does sometimes come from philosophy, but that it mostly doesn’t, and people are better off reading statistics and AI and cognitive science, like I said. So I’m not sure there’s anything left to argue.
The one major thing I’d like clarification on (if you can find the time) is the difference between what experimental philosophers are doing (or what Joshua Greene is doing) and the dissolution-to-algorithm that you consider so central to LW-style philosophy.
I’d like to emphasize, to no one in particular, that the evaluation that seems to be going on here is about whether or not reading these philosophers is useful for building a Friendly recursively self-improving artificial intelligence. While that’s a good criterion for whether or not Eliezer should read them, failure to meet this criterion doesn’t render the work of the philosopher valueless (really! it doesn’t!). The question “is philosophy helpful for researching AI” is not the same as the question “is philosophy helpful for a rational person trying to better understand the world”.
Tarski did philosophical work on truth? Apart from his mathematical logic work on truth? Haven’t read it if so.
What does Talbot say about a cognitive algorithm generating the appearance of free will? Is it one of the cognitive algorithms referenced in the LW dissolution or a different one? Does Talbot talk about labeling possibilities as reachable? About causal models with separate nodes for self and physics? Can you please take a moment to be specific about this?
Okay, now you’re just drawing lines around what you don’t like and calling everything in that box philosophy.
Should we just hold a draft? With the first pick the philosophers select… Judea Pearl! What? What’s that? The mathematicians have just grabbed Alfred Tarski from right under the noses of the philosophers!
To philosophers, Tarski’s work on truth is considered one of the triumphs of 20th century philosophy. But that sort of thing is typical of analytic and especially naturalistic philosophy (including your own philosophy): the lines between mathematics and science and philosophy are pretty fuzzy.
Talbot’s paper isn’t about free will (though others in experimental philosophy are); it’s about the cognitive mechanisms that produce intuitions in general. But anyway this is the post I’m drafting right now, so I’ll be happy to pick up the conversation once I’ve posted it. I might do a post on experimental philosophy and free will, too.
Yet to Wikipedia, Tarski is a mathematician. Period. Philosophy is not mentioned.
It is true that mathematical logic can be considered as a joint construction by philosophers and mathematicians. Frege, Russell, and Gödel are all listed in Wikipedia as both mathematicians and philosophers. So are a couple of modern contributors to logic—Dana Scott and Per Martin-Löf. But just about everyone else who made major contributions to mathematical logic—Peano, Cantor, Hilbert, Zermelo, Skolem, von Neumann, Gentzen, Church, Turing, Kolmogorov, Kleene, Robinson, Curry, Cohen, Lawvere, and Girard—is listed as a mathematician, not a philosopher. To my knowledge, the only pure philosopher who has made a contribution to logic at the level of these people is Kripke, and I’m not sure that should count (because the bulk of his contribution was done before he got to college and picked philosophy as a major. :)
Quine, incidentally, made a minor contribution to mathematical logic with his idea of ‘stratified’ formulas in his ‘New Foundations’ version of set theory. Unfortunately, Quine’s theory was found to be inconsistent. But a few decades later, a fix was discovered and today some of the most interesting Computer Science work on higher-order logic uses a variant of Quine’s idea to avoid Girard’s paradox.
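For the curious, Quine’s stratification condition is simple enough to check mechanically. A hedged sketch (the formula representation here is invented for illustration): a formula is stratified if each variable can be assigned an integer type so that every atom x ∈ y forces type(y) = type(x) + 1, and every atom x = y forces equal types.

```python
def stratified(atoms):
    """Check Quine-style stratification. Atoms are ("in", x, y) or
    ("eq", x, y); we look for a consistent integer type assignment."""
    # Edge (u -> v, d) demands type(v) = type(u) + d.
    edges = {}
    for op, x, y in atoms:
        d = 1 if op == "in" else 0
        edges.setdefault(x, []).append((y, d))
        edges.setdefault(y, []).append((x, -d))
    types = {}
    for start in edges:
        if start in types:
            continue
        types[start] = 0          # anchor each connected component at 0
        stack = [start]
        while stack:
            u = stack.pop()
            for v, d in edges[u]:
                if v not in types:
                    types[v] = types[u] + d
                    stack.append(v)
                elif types[v] != types[u] + d:
                    return False  # conflicting type constraints
    return True

print(stratified([("in", "x", "y"), ("in", "y", "z")]))  # True
print(stratified([("in", "x", "x")]))  # False: Russell-style self-membership
```

The second example shows why stratification blocks Russell’s paradox: x ∈ x cannot receive consistent types, so the offending comprehension is simply not admitted.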
This sort of thing is less a fact about the world and more an artifact of the epistemological bias in English Wikipedia’s wording and application of its verifiability rules. en:wp’s way of thinking started at computer technology—as far as I can tell, the first field in which Wikipedia was the most useful encyclopedia—and went in concentric circles out from there (comp sci, maths, physics, the other sciences); work in the humanities less than a hundred or so years old gets screwed over regularly. This is because the verifiability rules have to more or less compress a degree’s worth of training in sifting through human-generated evidence into a few quickly-comprehensible paragraphs, which are then misapplied by teenage science geek rulebots who have an “ugh” reaction to fuzzy subjects.
This is admittedly a bit of an overgeneralisation, but this sort of thing is actually a serious problem with Wikipedia’s coverage of the humanities. (Which I’m currently researching with the assistance of upset academics in the area in order to make a suitable amount of targeted fuss about.)
tl;dr: that’s stronger evidence of how Wikipedia works than of how the world works.
Wikipedia is not authoritative (and recognizes this explicitly—hence the need to give citations). Here is a quote from Tarski himself:
That sounds like a good way to describe the LW ideal as well.
I believe Carnap is also primarily listed as a philosopher in wikipiedia, and he certainly counts as a major contributor to modern logic (although, of course, much of his work relates to mathamatics as well).
Quine’s set theory NF has not been shown to be inconsistent. Neither has it been proven consistent, even relative to large cardinals. This is actually a famous open problem (by the standards of set theory...)
However, NFU (New Foundations with Urelements) is consistent relative to ZF.
Quoting Wikipedia
So I was wrong—the fix came only one decade later.
Oh, that’s where the name is familiar from...
To any of the scientists and mathematics I know personal and have discussed this with, the lines between science and philosophy and mathematics and philosophy are not fuzzy at all. Mostly I have only heard of philosophers talk about the line being fuzzy, or say that philosophy encompasses mathematics and science. The philosophers that I have seen do this seem to do it because they desire the prestige that comes along with science and math’s success at changing the world.
Is experimental philosophy considered philosophy or science? Is formal epistemology considered philosophy or mathematics? Was Tarski doing math or philosophy? Is Stephen Hawking’s latest book philosophy or science? You can draw sharp lines if you want, but the world itself isn’t cut that way.
I missed this reply for some reason until I noticed it today.
My comment concerned what I have observed, not my personal belief, and I tried to word it as such. For example: “To any of the scientists and mathematicians I know personal” (I am not going to repeat my spelling mistake), and “Mostly I have only heard of philosophers …”
I do not evaluate whole disciplines at once. I do evaluate individual projects or experimental set ups. For this reason and that I was sharing what I considered an interesting observation not my personal belief, I do not think answering your questions will forward the conversation significantly.
To me the line between science and non-science is clear, or can be made clear with further understanding. If society wants to draw a venn diagram where there is overlap between science and philosophy, it is just one more case of non-orthogonal terminology. While non-orthogonal terminology is inefficient, it is not the worst of society’s problems and should not be focused on unduly. I do think the line between science and non-science should be as sharp as possible, and making it fuzzy is a bad thing for society/humanity.
As I pointed out before, the same is true for me of Quine. I don’t know if lukeprog means to include Mathematical Logic when he keeps saying not to read Quine, but that book was effectively my introduction to the subject, and I still hold it in high regard. It’s an elegant system with some important innovations, and features a particularly nice treatment of Gödel’s incompleteness theorem (one of his main objectives in writing the book). I don’t know if it’s the best book on mathematical logic there is (I doubt it), but it appeals to a certain kind of personality, and I would certainly recommend it to a young high-schooler over reading Principia Mathematica, for example.
No, it’s more than that, but only things of that level are useful philosophy. Other things are not philosophy or more like background intros.
Amy just arrived and I’ve got to start book-writing, but I’ll take one example from this list, the first one, so that I’m not picking and choosing; later if I’ve got a moment I’ll do some others, in the order listed.
Predicate logic.
Funny you should mention that.
There is this incredibly toxic view of predicate logic that I first encountered in Good Old-Fashioned AI. And then this entirely different, highly useful and precise view of the uses and bounds of logic that I encountered when I started studying mathematical logic and learned about things like model theory.
Now considering that philosophers of the sort I inveighed against in “against modal logic” seem to talk and think like the GOFAI people and not like the model-theoretic people, I’m guessing that the GOFAI people made the terrible, horrible, no good, very bad mistake of getting their views of logic from the descendants of Bertrand Russell who still called themselves “philosophers” instead of those descendants who considered themselves part of the thriving edifice of mathematics.
Anyway. If you and I agree that philosophy is an extremely sick field, that there is no standardized repository of the good stuff, that it would be a desperate and terrible mistake for anyone to start their life studying philosophy before they had learned a lot of cognitive science and math and AI algorithms and plain old material science as explained by non-philosophers, and that it’s not worth my time to read through philosophy to pick out the good stuff even if there are a few small nuggets of goodness or competent people buried here and there, then I’m not sure we disagree on much—except this post sort of did seem to suggest that people ought to run out and read philosophy-qua-philosophy as written by professional philosophers, rather than this being a terrible mistake.
Will try to get to some of the other items, in order, later.
You may enjoy the following exchange between two philosophers and one mathematician.
Bertrand Russell, speaking of Gödel’s incompleteness theorem, wrote:
Wittgenstein dismissed the theorem as trickery:
Gödel replied:
According to Gleick (in The Information), the only person who understood Gödel’s theorem when Gödel first presented it was another mathematician, Neumann Janos, who moved to the USA and began presenting it wherever he went, by then calling himself John von Neumann.
The soundtrack for Gödel’s incompleteness theorem should be, I think, the last couple minutes of ‘Ludus’ from Tabula Rasa by Arvo Pärt.
I’ve been wondering why von Neumann didn’t do much work in the foundations of mathematics. (It seems like something he should have been very interested in.) Your comment made me do some searching. It turns out:
ETA: Am I the only one who fantasizes about cloning a few dozen individuals from von Neumann’s DNA, teaching them rationality, and setting them to work on FAI? There must be some Everett branches where that is being done, right?
We’d need to inoculate the clones against vanity, it appears.
Interesting story. Thanks for sharing your findings.
von Neumann wanted to nuke the Eastern Bloc countries. He would probably have been more interested in a commie-killing AI.
Well spoken! :)
Of course, since this is a community blog, we can have it both ways. Those of us interested in philosophy can go out and read (and/or write) lots of it, and we’ll chuck the good stuff this way. No need for anyone to miss out.
Exactly. Like I did with my statistical prediction rules post.
I’d be curious to know what that “toxic view” was. My GOFAI academic advisor back in grad school swore by predicate logic. The only argument against it that I ever heard was that proving or disproving something is undecidable (in theory) and frequently intractable (in practice).
Model theory as opposed to proof theory? What is it you think is great about model theory?
I have no idea what you are saying here. That “Against Modal Logic” posting, and some of your commentary following it strike me as one of your most bizarre and incomprehensible pieces of writing at OB. Looking at the karma and comments suggests that I am not alone in this assessment.
Somehow, you have picked up a very strange notion of what modal logic is all about. The whole field of hardware and software verification is based on modal logics. Modal logics largely solve the undecidability and intractability problems that bedeviled GOFAI approaches to these problems using predicate logic. Temporal logics are modal. Epistemic and game-theoretic logics are modal.
Or maybe it is just the philosophical approaches to modal logic that offended you. The classical modal logic of necessity and possibility. The puzzles over the Barcan formulas when you try to combine modality and quantification. Or maybe something bizarre involving zombies or Goedel/Anselm ontological proofs.
Whatever it was that poisoned your mind against modal logic, I hope it isn’t contagious. Modal logic is something that everyone should be exposed to, if they are exposed to logic at all. A classic introductory text, Robert Goldblatt’s Logics of Time and Computation (pdf), is now available free online. I just got the current standard text from the library, Blackburn et al.’s Modal Logic (textbook), and it is also very good. And the standard reference work, Blackburn et al.’s Handbook of Modal Logic, is outstanding (and available for less than $150 as Borders continues to go out of business :)
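For readers who haven’t met it, the relational (“Kripke”) semantics these books build on fits in a few lines. A toy sketch (the frame, valuation, and formula encoding below are all made up for illustration):

```python
# A Kripke model: a set of worlds, an accessibility relation R,
# and a valuation V saying which atoms hold at which worlds.
worlds = {"w1", "w2", "w3"}
R = {"w1": {"w2", "w3"}, "w2": {"w2"}, "w3": set()}
V = {"p": {"w2", "w3"}, "q": {"w2"}}

def holds(world, formula):
    """Evaluate a formula at a world. Formulas are nested tuples:
    an atom like "p", ("not", f), ("and", f, g), ("box", f), ("dia", f)."""
    if isinstance(formula, str):
        return world in V[formula]
    op = formula[0]
    if op == "not":
        return not holds(world, formula[1])
    if op == "and":
        return holds(world, formula[1]) and holds(world, formula[2])
    if op == "box":   # necessarily: true at every accessible world
        return all(holds(w, formula[1]) for w in R[world])
    if op == "dia":   # possibly: true at some accessible world
        return any(holds(w, formula[1]) for w in R[world])
    raise ValueError(f"unknown operator: {op}")

print(holds("w1", ("box", "p")))  # True: p holds at w2 and at w3
print(holds("w1", ("dia", "q")))  # True: q holds at the accessible w2
print(holds("w3", ("box", "q")))  # True: vacuously, w3 accesses no worlds
```

Changing the properties of R (reflexive, transitive, etc.) is what distinguishes the different modal systems, and reading □ as “at all future times” or “in all runs of the program” is exactly how the temporal and verification logics mentioned above arise.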
Reading Plantinga could poison almost anybody’s opinion of modal logic. :)
That is entirely possible. A five star review at the Amazon link you provided calls this “The classic work on the metaphysics of modality”. Another review there says:
Yet among the literally thousands of references in the three books I linked, Plantinga is not even mentioned. A fact which pretty much demonstrates that modal logic has left mainstream philosophy behind. Modal logic (in the sense I am promoting) is a branch of logic, not a branch of metaphysics.
Yeah, we don’t disagree much on all those points.
I didn’t say in my original post that people should run out and start reading mainstream philosophy. If that’s what people got from it, then I’ll add some clarifications to my original post.
Instead, I said that mainstream philosophy has some useful things to offer, and shouldn’t be ignored. Which I think you agree with if you’ve benefited from the work of Bostrom and Dennett (including, via Drescher) and so on. But maybe you still disagree with it, for reasons that are forthcoming in your response to my other examples of mainstream philosophy contributions useful to Less Wrong.
But yeah, don’t let me keep you from your book!
As for predicate logic, I’ll have to take your word on that. I’ll ‘downgrade it’ in my list above.
FWIW, what I got from your original post was not “LW readers should all go out and start reading mainstream philosophy,” but rather “LW is part of a mainstream philosophical lineage, whether its members want to acknowledge that or not.”
Thanks for sharing. That too. :)
I’m part of Roger Bacon’s lineage too, and not ashamed of it either, but time passes and things improve and then there’s not much point in looking back.
Meh. Historical context can help put things in perspective. You’ve done that plenty of times in your own posts on Less Wrong. Again, you seem to be holding my post to a different standard of usefulness than your own posts. But like I said, I don’t recommend anybody actually read Quine.
Oftentimes you simply can’t understand what some theorem or experiment was for without at least knowing about its historical context. Take something as basic as calculus: if you’ve never heard the slightest thing about classical mechanics, what possible meaning could a derivative, integral, or differential equation have to you?
Does human nature improve, too?
What’s “human nature”?
Something that probably hasn’t changed much over the history of philosophy.
I’d very much like to see a post explaining that.
I’m not sure what “of that level” (of dissolving-to-algorithm) means, but I think I’ve demonstrated that quite a lot of useful stuff comes from mainstream philosophy, and indeed that a lot of mainstream philosophy is already being used by yourself and Less Wrong.
It seems a shame to leave this list with several useful cites as a comment, where it is likely to be missed. Not sure what to suggest—maybe append it to the main article?
I added a link to this list to the end of the original post.
I thought Chalmers was a newbie to all this—and showed it quite a bit. However, a definite step forward from zombies. Next, see if Penrose or Searle can be recruited.
Unfortunately for your argument in that sequence, very few actual physicists see the interpretation of quantum mechanics as a choice between “wavefunctions are real, and they collapse” and “wavefunctions are real, and they don’t”. I think life set you up for that choice because you got some of your early ideas about QM from Penrose, who does advocate a form of objective collapse theory. But the standard interpretation is that the wavefunction is not the objective state of the system, it is a tabulation of dispositional properties (that is philosophical terminology and would be unfamiliar to physicists, but it does express what the Copenhagen interpretation is about).
I might object to a lot of what physicists say about the meaning of quantum mechanics—probably the smartest ones are the informed realist agnostics like Gerard ’t Hooft, who know that an observer-independent objectivity ought to be restored but who also know just how hard that will be to achieve. But the interpretation of quantum mechanics is not an “elementary Bayesian problem”, nor is it an elementary problem of any sort. Given how deep the quantumness of the world goes, and the deep logical interconnectedness of things in physics, the correct explanation is probably one of the last fundamental facts about physics that we will figure out.
Unfortunately this is a typical example of the kind of thing that goes wrong in philosophy.
Our actual knowledge in this area is actually encapsulated by the equations of quantum mechanics. This is the bit we can test, and this is the bit we can reason about correctly, because we know what the rules are.
We then go on to ask what the real meaning of quantum mechanics is. Well, perhaps we should remind ourselves that what we actually know is in the equations of quantum mechanics, and in the tests we’ve made of them. Anything else we might go on to say might very well not be knowledge at all.
So in interpreting quantum mechanics, we tend to swap a language we can work with (maths) for another language which is more difficult (English). OK—there are some advantages in that we might achieve more of an intuitive feel by doing that, but it’s still a translation exercise.
Many worlds versus collapse? Putting it pointedly, the equations themselves don’t distinguish between a collapse and a superposition of correlated states. Why do I think that my ‘interpretation’ of quantum mechanics should do something else? But in fact I wouldn’t say either one is ‘correct’. They are both translations into English / common-sense-ese of something that’s actually best understood in its native mathematics.
Translation is good—it’s better than giving up and just “shutting up and calculating”. But the native truth is in the mathematics, not the English translation.
In other words, the Born probabilities are just numbers in the end. Their particular correlation with our anticipated experience is a linguistic artifact arising from a necessarily imperfect translation into English. Asking why we experience certain outcomes more frequently than others is good, but the answer is a lower-status kind of truth—the native truth is in the mathematics.
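The “just numbers” point can be made concrete: the Born rule is nothing but squared magnitudes of complex amplitudes, normalized. A toy sketch in Python (the two-outcome amplitudes are illustrative):

```python
# Born rule as bare arithmetic: the probability of each outcome is the
# squared magnitude of its normalized complex amplitude.
import math

amplitudes = [complex(1, 0), complex(0, 1)]  # unnormalized state |0> + i|1>
norm = math.sqrt(sum(abs(a) ** 2 for a in amplitudes))
probs = [abs(a / norm) ** 2 for a in amplitudes]

print(probs)  # each outcome gets probability ~0.5; the probabilities sum to 1
```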
Yes they do. Experimentation doesn’t. Yet.
I believe I understand the warning here. The whole field of philosophy reminds me of the introduction to one of the first books on computer system development—The mythical man-month.
“No scene from prehistory is quite so vivid as that of the mortal struggles of great beasts in the tar pits. In the mind’s eye one sees dinosaurs, mammoths, and saber-toothed tigers struggling against the grip of the tar. The fiercer the struggle, the more entangling the tar, and no beast is so strong or so skillful but that he ultimately sinks.
Large-system programming has over the past decade been such a tar pit, and many great and powerful beasts have thrashed violently in it. Most have emerged with running systems—few have met goals, schedules, and budgets. Large and small, massive or wiry, team after team has become entangled in the tar. No one thing seems to cause the difficulty—any particular paw can be pulled away. But the accumulation of simultaneous and interacting factors brings slower and slower motion. Everyone seems to have been surprised by the stickiness of the problem, and it is hard to discern the nature of it. But we must try to understand it if we are to solve it.”
The tar pit, as the book goes on to describe, is information complexity, and far too many philosophers seem content to jump right into the middle of that morass, convinced they will be able to smash their way out. The problem is not the strength of their reason, but the lack of a solid foothold—everything is sticky and ill-defined, there is nothing solid to stand on. The result is much thrashing, but surprisingly little progress.
The key to progress, for nearly everyone, is to stay where you know solid ground is. Don’t jump in the tar pit unless you absolutely have no other choice. Logic is of very little help when you have no clear foundation to rest it on.
Yup! Most of analytic philosophy’s foundation has been intuition, and, well… thar’s yer problem right thar!
There has been some recent work in tackling the dependence on intuitions. The Experimental Philosophy (X-Phi) movement has been doing some very interesting stuff examining the role of intuition in philosophy, what intuitions are and to what extent they are useful.
One of the landmark experiments involved surveys showing cross-cultural variation in responses to certain philosophical thought experiments (for example, in what cases someone is acting intentionally), e.g. Weinberg et al. (2001). This obviously presents a problem for any philosophical argument that uses such intuitions as premises.
The next stage is explaining these variations, and showing how, by acknowledging these issues, you can remove biases without going too far into skepticism to be useful. To caricature the problem: if I can’t trust certain of my intuitions, I shouldn’t trust them in general. But then how can I trust very basic foundations (such as: a statement cannot be simultaneously true and false), and from there build up to any argument?
This area seems particularly relevant to this discussion, as there has been definite progress in the very recent past, in a manner very consistent with rationalist techniques and goals.
[This is my first LW post, so apologies for any lack of clarity or deviation from accepted practice]
Welcome to LW!
You’re right that there has been lots of progress on this issue in the recent past. Other resources include the book Rethinking Intuition, this issue of SPE, Brian Talbot’s dissertation, and more.
In fact I’m writing up a post on this subject, so if you have other resources to point me to, please do!
Weinberg is awesome. He’s going to be a big deal, I think.
But I’ve already pointed out that you do a lot more philosophy than just dissolution-to-algorithm. Dissolution to algorithm is not the only valuable thing to do in philosophy. Not all philosophical problems can be dissolved that way. Some philosophical problems turn out to be genuine problems that need an answer.
My claim that we shouldn’t ignore philosophy is already supported by the points I made about how vast swaths of the useful content on Less Wrong have been part of mainstream philosophy for decades.
I’m not going to argue that philosophy isn’t a terribly sick field, because it is a terribly sick field. Instead I’m arguing that you have already taken a great deal of value (directly or indirectly) from mainstream philosophy, and I gave more interesting examples than “metaphysical libertarianism is false” and “people are made of atoms.”
Even the best philosophy is this. Dan Dennett is devoted to arguing.
Of course, by Beisutsukai standards, philosophy is almost as good as physics. Both are far too slow.
Well, show me the power of LW then.
Since Quinean philosophy is just LW rationality but earlier, that should settle it.
I find it likely that if someone were to trace the origins of LW rationality one would end up with Quine or someone similar. E.g. perhaps you read an essay by a Quinean philosopher when you were younger.
I doubt it. In fact I’m pretty certain that Quine had nothing to do with ‘the origins of LW rationality’. I came to many (though by no means all) of the same conclusions as Eliezer independently, some of them in primary school, and never heard of Quine until my early 20s. What I had read—and what it’s apparent Eliezer had read—was an enormous pile of hard science fiction, Feynman’s memoirs, every pop-science book and issue of New Scientist I could get my hands on and, later, Feynman’s Lectures on Physics. If you start out with a logical frame of mind, and fill that mind up with that kind of stuff, then the answers to certain questions come out as just “that’s obvious!” or “that’s a stupid question!” Enough of them did to me that I’m pretty certain that Eliezer also came to those conclusions (and the others he’s come to and written about) independently.
Timing argues otherwise. We don’t see Quine-style naturalists before Quine; we see plenty after Quine.
Eliezer doesn’t recognize and acknowledge the influence? He probably wouldn’t! People to a very large extent don’t recognize their influences. To give just a trivial example, I have often said something to someone, only to find them weeks later repeating back to me the very same thing, as if they had thought of it. To give another example, pick some random words from your vocabulary—words like “chimpanzee”, “enough”, “unlikely”. Which individual person taught you each of these words (probably by example), or which set of people? Do you remember? I don’t. I really have no idea where I first picked up any bit of my language, with occasional exceptions.
For the most part we don’t remember where exactly it was that we picked up this or that idea.
Of course, if Eliezer says he never read Quine, I don’t doubt that he never read Quine. But that doesn’t mean that he wasn’t influenced by Quine. Quine influenced a lot of people, who influenced a lot of other people, who influenced still more people, some of whom could very easily have influenced Eliezer without Eliezer having the slightest notion that the influence originated with Quine.
It’s hard to trace influence. What’s not so hard is to observe timing. Quine comes first—by decades.
Eliezer knows Bostrom pretty well and Bostrom is influenced by Quine, but I simply doubt the claim about no Quine style naturalists before Quine. Hard to cite non-citations though, so I can go on not believing you, but can’t really say much to support it.
Well, my own knowledge is spotty, and I have found that philosophy changes gradually, so that immediately before Quine I would expect you to find philosophers who in many ways anticipate a significant fraction of what Quine says. That said, I think that Quine genuinely originated much that was important. For example I think that his essay Two Dogmas of Empiricism contained a genuinely novel argument, and wasn’t merely a repeat of something someone had written before.
But let’s suppose, for the sake of argument, that Quine was not original at all, but was a student of Spline, and Spline was the actual originator of everything associated with Quine. I think that the essential point that Eliezer probably is the beneficiary of influence and is standing on the shoulders of giants is preserved, and the surrounding points are also preserved, only they are not attached specifically to Quine. I don’t think Quine specifically is that important to what lukeprog was saying. He was talking about a certain philosophical tradition which does not go back forever.
(EDIT: Quine was not Rapaport’s advisor; Hector-Neri Castaneda was.) William Rapaport, together with Stu Shapiro, applied Quine’s ideas on semantics and logic to knowledge representation and reasoning for artificial intelligence. Stu Shapiro edited the Encyclopedia of Artificial Intelligence, which may be the best survey ever made of symbolic artificial general intelligence. Bill and Stu referenced Quine in many of their papers, which have been widely read in artificial intelligence since the early 1980s.
There are many concepts from Stu and Bill’s representational principles that I find useful for dissolving philosophical problems. These include the concepts of intensional vs. extensional representation, deictic representations, belief spaces, and the unique variable binding rule. But I don’t know if any of these ideas originate with Quine, because I haven’t studied Quine. Bill and Stu also often cited Meinong and Carnap; I think many of Bill’s representational ideas came from Meinong.
A quick google of Quine shows that a paper that I’m currently making revisions on is essentially a disproof of Quine’s “indeterminacy of translation”.
Applying the above to Quine would seem to at least weakly contradict:
You seem to be singling out Quine as unique rather than just a link in a chain, unlike Eliezer and people who do not recognize their influences. This seems unlikely to me. Is this what you meant to communicate?
I don’t assume Quine to be any different from anyone else in recognizing his influences.
It is because I have no particular confidence in anyone recognizing their own influences that I turn to timing to help me answer the question of independent creation.
1) If a person is the first person to give public expression to an idea, then the chance is relatively high that he is the originator of the idea. It’s not completely certain, but it’s relatively high.
2) In contrast, if a person is not the first person to give public expression to an idea but is, say, the 437th person to do so, the first having done so fifty years before, then chances are relatively high that he picked up the idea from somewhere and didn’t remember picking it up. The fact that nobody expressed the idea before fifty years earlier suggests that the idea is pretty hard to come up with independently, because had it been easy, people would have been coming up with it all through history.
3) Finally, if a person is not the first person to give public expression to an idea but people have been giving public expression to the idea for as long as we have records, then the chance is relatively high once again that he independently rediscovered the idea, since it seems to be the sort of idea that is relatively easy to rediscover independently.
This can be true, but it is also possible that an idea may be hard to independently develop because the intellectual foundations have not yet been laid.
Ideas build on existing understandings, and once the groundwork has been done there may be a sudden eruption of independent-but-similar new ideas built on those foundations. They were only hard to come up with until that time.
Well, yes, but that’s essentially my point. What you’ve done is pointed out that the foundation might lie slightly before Quine. Indeed it might. But I don’t think this changes the essential idea. See here for discussion of this point.
Our viewpoints diverge here. I do not agree that being the first person to give public expression to an idea, and to be recorded by history, alone gives a high probability that he/she is the originator of the idea. You also said you factor in the originality of the idea. I only know Quine through what little I have read here and on Wikipedia, and did not judge it original enough to be confident that the ideas he popularized could be thought of as his creation. It seems unlikely; I would, however, need more data to argue strongly one way or another.
I didn’t say “high probability”, I said “relatively high”. By which I mean it is high relative to some baseline in which we don’t know anything, or relative to the second case. In other words, what I am saying is that if a person is the first to give public expression, this is evidence that he originated it.
Many others thought it highly original. Also, I’m not confident that you’re in a position to make that judgment. You would need to be pretty familiar with the chronology of ideas to make that call, and if you were, you would probably be familiar with Quine.
I do not think asserting this is helpful to the conversation. I did not claim confidence; I have admitted to wanting more data. This is an opportunity to teach what you know and/or share resources. If you are not interested, then I will put it on my list of things to do later.
By the way, it’s not that I disagree with your decision not to cover means vs. ends in CEV. I explained how it would be useful and clarifying. You then agreed that that insight from mainstream philosophy is useful for CEV, but you didn’t feel it was necessary to mention in the paper because your CEV paper didn’t go into enough detail to make it necessary. I don’t have a problem with that.
Given that your audience at least in some sense disagrees, you’d do well to use a more powerful argument than “it would be odd” (it would be a fine argument if you expected the audience’s intuitions to align with the statement, but it’s apparently not the case), especially given that your position suggests how to construct one: find an insight generated by mainstream philosophy that would be considered new and useful on LW (which would be most effective if presented/summarized in LW language), and describe the process that allowed you to find it in the literature.
On a separate note, I think finding a place for LW rationality in academic philosophy might be a good thing, but this step should be distinguished from the connotation that brings about usefulness of (closely located according to this placement) academic philosophy.
So, I agree denotationally with your post (along the lines of what you listed in this comment), but still disagree connotationally with the implication that standard philosophy is of much use (pending arguments that convince me otherwise; the disagreement itself is not that strong). I disagree strongly with the way in which this connotation seems to argue its case through this post, without presenting arguments that, under its own assumptions, should be available. I understand that you were probably unaware of this interpretation of your post (i.e. as arguing for mainstream philosophy being useful, as opposed to laying out some groundwork in preparation for such an argument), or consider it incorrect, but I would argue that you should’ve anticipated it and taken it into account.
(I expect if you add a note at the beginning of the post to the effect that the point of this particular post is to locate LW philosophy in mainstream philosophy, perhaps to point out priority for some of the ideas, and edit the rest with that in mind, the connotational impact would somewhat dissipate, without changing the actual message. But given the discussion that has already taken place, it might be not worth doing.)
No, I didn’t take the time to make an argument.
But I am curious to discuss this with someone who doesn’t find it odd that mainstream philosophy could make useful contributions up until a certain point and then suddenly stop. That’s far from impossible, but I’d be curious to know what you think caused the stop in useful progress. And when did that supposedly happen? In the 1960s, after philosophy’s predicate logic and Tarskian truth-conditional theories of language were mature? In the 1980s? Around 2000?
The inability of philosophers to settle on a position on an issue and move on. It’s very difficult to make progress (i.e., additional useful contributions) if your job depends not on moving forwards and generating new insights, but rather on going back and forth over old arguments. People like, e.g., Yudkowsky, whose job allows/requires him to devote almost all of his time to new research, would be much more productive; possibly, depending on the philosopher and non-philosopher in question, so much more productive that going back over philosophical arguments and positions isn’t very useful.
The time would depend on the field in question, of course; I’m no expert, but from an outsider’s perspective I feel like, e.g. linguistics and logic have had much more progress in recent decades than, e.g. philosophical consciousness studies or epistemology. (Again, no expert.) However, again, my view is less that useful philosophical contributions have stopped, and more that they’ve slowed to a crawl.
This is indeed why most philosophy is useless. But I’ve asserted that most philosophy is useless for a long time. This wouldn’t explain why philosophy would nevertheless make useful progress up until the 60s or 80s or 2000s and then suddenly stop. That suggestion remains to be explained.
(My apologies; I didn’t fully understand what you were asking for.)
First, it doesn’t claim that philosophy makes zero progress, just that science/AI research/etc. makes more. There were still broad swathes of knowledge (e.g. linguistics and psychology) that split off relatively late from philosophy, and in which philosophers were still making significant progress right up to the point where they became sciences.
Second, philosophy has either been motivated by or free-riding off of science and math (e.g., to use your example, Frege’s development of predicate logic was motivated by his desire to place math on a more secure foundation). But the main examples (that are generally cited elsewhere, at least) of modern integration or intercourse between philosophy and science/math/AI (e.g. Dennett, Drescher, Pearl, etc.) have already been considered, so it’s reasonable to say that mainstream philosophy probably doesn’t have very much more to offer, let alone a “centralized repository of reductionist-grade naturalistic cognitive philosophy” of the sort Yudkowsky et al. are looking for.
Third, the low-hanging fruit would have been taken first; because philosophy doesn’t settle points and move on to entire new search spaces, it would get increasingly difficult to find new, unexplored ideas. While they could technically have moved on to explore new ideas anyways, it’s more difficult than sticking to established debates, feels awkward, and often leads people to start studying things not considered part of philosophy (e.g. Noam Chomsky or, to an extent, Alonzo Church.) Therefore, innovation/research would slow down as time went on. (And where philosophers have been willing to go out ahead and do completely original thinking, even where they’re not very influenced by science, LW has seemed to integrate their thinking; e.g. Parfit.)
(Btw, I don’t think anybody is claiming that all progress in philosophy had stopped; indeed, I explicitly stated that I thought that it hadn’t. I’ve already given four examples above of philosophers doing innovative work useful for LW.)
Yeah, I’m not sure we disagree on much. As you say, Less Wrong has already made use of some of the best of mainstream philosophy, though I think there’s still more to be gleaned.
Just now. As of today, I don’t expect to find useful stuff that I don’t already know in mainstream philosophy already written, commensurate with the effort necessary to dig it up (this situation could be improved by reducing the necessary effort, if there is indeed something in there to find). The marginal value of learning more existing math or cognitive science or machine learning for answering the same (philosophical) questions is greater. But future philosophy will undoubtedly bring new good insights, in time, absent defeaters.
So maybe your argument is not that mainstream philosophy has nothing useful to offer but instead just that it would take you more effort to dig it up than it’s worth? If so, I find that plausible. Like I said, I don’t think Eliezer should spend his time digging through mainstream philosophy. Digging through math books and AI books will be much more rewarding. I don’t know what your fields of expertise are, but I suspect digging through mainstream philosophy would not be the best use of your time, either.
I don’t believe that for the purposes of development of human rationality or FAI theory this should be on anyone’s worth-doing list for some time yet, before we can afford this kind of specialization to go after low-probability perks.
I expect that there is no existing work coming from philosophy useful-in-itself to an extent similar to Drescher’s Good and Real (and Drescher is/was an AI researcher), although it’s possible and it would be easy to make such work known to the community once it’s discovered. People on the lookout for these things could be useful.
I expect that reading a lot of related philosophy with a prepared mind (so that you don’t catch an anti-epistemic cold or death) would refine one’s understanding of many philosophical questions, but mostly not in the form of modular communicable insights, and not to a great degree (compared to background training from spending the same time studying math/AI, that is ways of thinking you learn apart from the subject matter). This limits the extent to which people specializing in studying potentially relevant philosophy can contribute.
Do you still think this after reading my ‘for starters’ list of mainstream philosophy contributions useful to Less Wrong? (below)
The low-hanging fruit is already gathered. That list (outside of the AI/decision theory references) looks useful for discussing questions of priority and for gathering real-world data (where it refers to psychological experiments). We already know the work of Bostrom’s group, Drescher, and Pearl; pointing these out is not a clear example of the potential fruits of the quest for scholarship in philosophy (confusingly enough, but keep in mind the low-hanging fruit part, and that the means of finding these were unrelated to scholarship in philosophy; also, being on the lookout for self-contained significant useful stuff is the kind of activity I was more optimistic about in my comment).
I don’t get it. When low-hanging fruit is covered on Less Wrong, it’s considered useful stuff. When low-hanging fruit comes from mainstream philosophy, it supposedly doesn’t help show that mainstream philosophy is useful. If that’s what’s going on, it’s a double standard, and a desperate attempt to “show” that mainstream philosophy isn’t useful.
Also, saying “Well, we already know about lots of mainstream philosophy that’s useful” is direct support for the central claim of my original post: That mainstream philosophy can be useful and shouldn’t be ignored.
Most of the stuff already written on Less Wrong is not useful to the present me in the same sense as philosophy isn’t, because I already learned what I expected to be the useful bits. I won’t be going on a quest for scholarship in Less Wrong either. And if I need to prepare an apprentice, I would give them some LW sequences and Good and Real first (on the philosophy side), and looking through mainstream philosophy won’t come up for a long time.
These two use cases are the ones that matter to me, what use case did you think about? Just intuitive “usefulness” is too unclear.
I agree that mainstream philosophy is far from the first or most important thing one can study.
The use case I’m particularly focused on is machine ethics for self-modifying superintelligence. That draws on a huge host of issues discussed at length in the mainstream literature, including much of the material I listed below, and also stuff I haven’t mentioned yet on the problems with reflective equilibrium (which CEV uses), consequentialism, and so on.
Well, I don’t share your expectation to learn useful stuff (on the philosophy side) about that, what you won’t find in AI textbooks, metaethics sequence, FHI papers, Good and Real, and other sources already located.
But some of the sources you just listed are from mainstream philosophy...
Also, I’m working on some stuff with regard to machine ethics for superintelligence, so I’ll be curious to find out if you find that useful as well.
Again, location of those sources was not (and were it otherwise, could well be not) a product of scholarship in mainstream philosophy, which subtracts from expectation of usefulness of the activity of reading new unknown stuff, which is an altogether different enterprise from reading the stuff that’s already known to be useful.
Do you mean, would I find your survey papers/book useful?
Probably not for me, maybe useful as material for new people to study, since it’s focused on this particular problem and so could collect the best relevant things you’ll find, depending on your standard of quality/relevance in selecting things to discuss. From what I saw of your first drafts and other articles, it’ll probably look more like a broad eclectic survey than useful-for-study lecture notes, which subtracts from that use case (but who knows).
Could catalyze conversation in academia or elsewhere though, or work as standard reference node for when you’re in a hurry and don’t want to dereference it.
(Compare with Chalmers’ paper, which is all fine in the general outline, generates a citation node, allows one to introduce people from a particular background to the motivation for AGI-risks-related discussion, and has already initiated discussion in academia. But it’s not useful as study material, given available alternatives, nor does it say anything new.)
I think we agree on this, so I’ll drop it. My original post claimed that mainstream philosophy makes useful contributions and should not be ignored, and you agree. We also agree that poring through the resources of mainstream philosophy is not the best use of pretty much anyone’s time.
As for my forthcoming work on machine ethics for superintelligence...
Yep. I want to write short, broad, well-cited overviews of the subjects relevant to Friendly AI, something that mostly has not yet been done.
Yes.
Right.
You’ve hit on most of the immediate goals of such work, though eventually my intention is to contribute to more of the cutting-edge stuff on Friendly AI, for example on how reflective equilibrium could be programmatically implemented in CEV. But that’s getting ahead of myself. Also, it’s doubtful that such work will actually materialize, because of the whole ‘not being independently wealthy’ problem I have. Research takes time, and I’ve got rent to pay.
What’s the low-hanging fruit mixed with? If I have a concentrated basket of low-hanging fruit, I call that an introductory textbook and I eat it. Extending the tortured metaphor, if I find too much bad fruit in the same basket, I shop for the same fruit at a different store.
Possibly helpful: a PDF version of Medin’s “Concepts and Categories.”
Hilary Kornblith was my advisor in grad school. He’s a cool dude.
I’m jealous! As you probably know, he is perhaps the leading defender of naturalized epistemology today.
Yup. I took a class on naturalized epistemology with him and got to listen to him talk about it in his nifty deep voice.
In undergrad I had to read Quine’s From Stimulus to Science for one of my philosophy classes, and I remember thinking “so what’s your point?” It seemed like what Quine really needed to do in that work was talk about induction, but he just skirted the issue. Have you read it? What’s your take? This was my only real exposure to Quine, so it’s probably part of the reason I dismissed him.
(It’s been a couple years since I’ve read it, so my memory may be off or I might have a different view if I read it now.)
I think Quine’s original works are hard to read, and not the best presentation of his own work. I recommend instead Quine: A Guide for the Perplexed.
In general, I think primary literature is over-recommended for initial learning. There is almost always better coverage of the subject in secondary literature.
FWIW, I think most of Quine’s original work that I’ve read is very nicely written and very clear. (Not all; for some reason I never really got on with Word and Object.)
So I stumbled on these instructions:
Below is a list of the random articles I began from, and how long it took me to get to the Philosophy article.
Gymnasium Philippinum: 11
Brnakot: 23
Ohrenbach: 11
Vrijburg: 24
The Love Transcendent: 14
2010 in tennis: 13
Cross of All Nations: 24
List of teams and cyclists in the 2003 Tour de France: 14
Anton Ehmann: 19
Traveling carnival: 25
Frog: 13
Some, however, go into an immediate loop, for example between fringe theatre and alternative theatre.
Philosophy, of course, loops back on itself in just a few steps.
The Wikipedia version of the Collatz conjecture.
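The analogy can be made concrete: the Collatz map sends n to n/2 when n is even and to 3n+1 when n is odd, and the (unproven) conjecture is that every trajectory eventually reaches 1, much as almost every first-link chain on Wikipedia eventually reaches Philosophy. A minimal sketch:

```python
def collatz_steps(n):
    """Count the steps for n to reach 1 under the Collatz map:
    n -> n / 2 if n is even, n -> 3n + 1 if n is odd."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(6))   # 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1: 8 steps
print(collatz_steps(27))  # a famously long trajectory: 111 steps
```

Nobody has proved the loop always terminates at 1, which is exactly what makes the comparison to the Philosophy game apt.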
And if you click two more times starting from Philosophy, you get to Rationality. Rationality, of course, loops back to itself.
This is probably a result of what Eliezer said about going up one level. The first link in a Wikipedia article almost always goes up one level. Philosophy is the universal top level.
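The game above is easy to mechanize: repeatedly follow an article’s first link, stopping when you reach Philosophy or revisit a page. Here is a minimal sketch over a small hypothetical link graph (a hand-made stand-in for Wikipedia’s real first links, which would require scraping):

```python
def follow_first_link(graph, start, target="Philosophy", max_steps=100):
    """Follow each article's first link until reaching `target` or
    revisiting an article (a loop). Returns (outcome, path)."""
    path, seen = [start], {start}
    current = start
    for _ in range(max_steps):
        if current == target:
            return "reached", path
        current = graph[current]
        path.append(current)
        if current in seen:
            return "loop", path
        seen.add(current)
    return "gave up", path

# Hypothetical first-link graph, standing in for the real thing:
links = {
    "Frog": "Amphibian", "Amphibian": "Animal", "Animal": "Biology",
    "Biology": "Science", "Science": "Knowledge", "Knowledge": "Philosophy",
    "Fringe theatre": "Alternative theatre",
    "Alternative theatre": "Fringe theatre",
}
print(follow_first_link(links, "Frog"))           # reached in 6 clicks
print(follow_first_link(links, "Fringe theatre")) # immediate two-article loop
```

The `seen` set is what catches cases like the fringe theatre / alternative theatre loop mentioned above.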
This seems to be saying that Quinean philosophy reached (correct) conclusions similar to Less Wrong, and that since it came first it probably influenced LW, directly or indirectly, and therefore, we should study Quinean philosophy. But this does not follow; if LW and Quine say the same things, and either LW is better written or we’ve already read it, then this is a reason not to read Quine, because of the duplication. The implied argument seems to be: Quine said these things first ⇒ Quine deserves prestige ⇒ We should read Quine. But prestige alone is not a sufficient reason to read anything.
No, I advise against reading Quine. I only said above that rationalists should not ignore mainstream (Quinean) philosophy. That’s a much weaker claim than the one you’ve attributed to me. Much of LW is better-written and better informed by the latest science than some of the best Quinean philosophy being written today.
What I’m claiming is that Quinean philosophy has made, and continues to make, useful contributions, and thus shouldn’t be ignored. I have some examples of useful contributions from Quinean philosophy here.
Necro-post, but I have to say I think a lot of people might have been/be talking past each other here. The question isn’t whether mainstream philosophy has useful insights to offer, the question is whether studying mainstream philosophy, i.e. “not ignoring it”, as you put it, is the best possible use of one’s time, as opposed to studying, say, AI research. There are opportunity costs for everything you do, and frankly, I’d say reading philosophy has (for me) too high of an opportunity cost and too low of an expected benefit to justify doing so. I don’t think I’d be mistaken in saying that this is probably true for many other LW readers as well.
I’m reminded of Caliph Omar’s apocryphal comments about the Library of Alexandria.
Perhaps the argument is more like this:
Quine said many things that we agree with
Some of these are non-obvious; it’s possible that we wouldn’t all have come up with them had we not had this community
Since we have not explicitly mentioned Quine before it is unlikely that we have already heard everything he came up with
Therefore reading Quine may reveal other useful, non-obvious insights, which we might take a long time to come up with on our own
Therefore we should read Quine.
I don’t advocate reading Quine directly, but rather Quinean philosophy. For example Epistemology and the Psychology of Human Judgment, which reads like a series of Less Wrong blog posts, but covers lots of material not yet covered on Less Wrong. (I made a dent in this by transposing its coverage of statistical prediction rules into a Less Wrong post.)
And I don’t advocate it for everyone. Doing research in philosophy is my specialty, but I don’t think Eliezer should waste his time poring through philosophy journals for useful insights. Nor should most people. But then, most people won’t benefit from reading through books on algorithmic learning theory, either. That’s why we have divisions of labor and expertise. The thing I’m arguing against is Eliezer’s suggestion that people shouldn’t read philosophy at all outside of Less Wrong and AI books.
Some questions and thoughts about this:
How is it that ‘naturalism’ is the L.W. philosophy? I am not a naturalist, as I understand that term. What is the prospect of fair treatment for a dissenter to the L.W. orthodoxy?
Where does Quine talk about postmodernism, or debates about the meanings of terms like ‘knowledge’? If a reference is available it’d be appreciated.
What exactly do you understand by ‘naturalism’, and what does it commit you to? Pointing to Quine et al. gives some indication, but it should not be assumed that there is no value, if being a naturalist is important to you, in trying to be more precise than this. One suggestion, still quite crude, is that there are only empirical and historical facts: there is no fact which doesn’t ultimately boil down to some collection of facts of these types. Plausibly such a view implies that there are no facts about rationality, insofar as rationality concerns what we ought to think and do, and this is not something implied solely by facts about the way the world measurably is and has been. Is this an acceptable consequence?
What exactly do you mean by ‘reductionism’? There are at least the following two possibilities:
1) There is some privileged set of basic physical laws (the domain of micro-physics), and all higher-order laws are in principle derivable from the members of the privileged set.
2) There is some set of basic concepts, and all higher order concepts are merely logical constructions of these.
Depending on how (1) is spelled out, it is plausibly fairly trivial, and not something anyone of Quine’s generation could count as an innovative or courageous position.
Proposition (2), by contrast, is widely thought to be false. And surely one of the earliest and strongest criticisms of it is found in Quine’s own ‘Two Dogmas of Empiricism’.
Is there some third thesis under the name ‘reductionism’ which is neither close to trivial nor likely false, that you have in mind?
Concerning the role of shared intuition in philosophy. It’s an interesting subject, worthy of thought. But roughly, its value is no more than the sort of shared agreement relied upon in any other collaborative discipline. Just as in mathematics and physics you have to count on people to agree at some point that certain things are obvious, so too in philosophy. The difference is that in philosophy the things agreed upon are often value judgments (carefully considered). Intuitions are of use in philosophy only to the extent that almost any rational person can be counted on to share them (Theory X implies it’s morally acceptable to kill a person in situation Y; intuitively it is not acceptable to kill a person in situation Y; therefore X is flawed). So I don’t see much to the claim that they present a problem.
What do you take the claim that philosophy should be about cognitive science to imply? Do you think there should be no philosophy of language, no philosophy of mind, no aesthetics, no ethics, and on and on? Or do you really think that a complete understanding of the functioning of the brain would afford all of the answers to the questions these undertakings ask? I looked for an answer to this question in the post linked to as the source of this thought, but it is more a litany of prejudices and apparently uninformed generalizations than an argument. Not a model of rationality, at least.
Answering your questions in order...
Naturalism is presupposed by all or nearly all promoted Less Wrong posts, and certainly by all of Eliezer’s posts. I don’t know what the prospects are for fair treatment of dissenters.
Here is a quick overview of Quine on postmodernism. On Quine on useless debates about the meaning of terms, see Quine: A Guide for the Perplexed.
There are lots of meanings of naturalism, explored for example in Ritchie’s Understanding Naturalism. What I mean by ‘Quinean naturalism’ is summed up in the original post.
As for reductionism—I mean this kind of reductionism. The (2) kind of reductionism you mentioned is, of course, the second dogma of empiricism that Quine famously attacked (the first being analyticity). And that is not what I mean by reductionism.
I’m working on a post on intuitions where my positions on them will become clearer.
As for philosophy being about cognitive science, I’d have to write at great length to explain, I suppose. I’ll probably write a post on that for my blog Common Sense Atheism, at which time I’ll try to remember to come back here and link to it.
Since you don’t recommend reading Quine directly, what books / other resources would you recommend for someone who wants to read the main arguments for and against naturalism? My only knowledge of the subject comes from the sequences (and it seems like those mostly take it for granted).
If you feel you’re really not confident that people are made of atoms and so on, and you want an introduction to the standard debates over naturalism in mainstream philosophy, you can start with Ritchie’s Understanding Naturalism.
If you change your mind and want a quick and relatively readable tour from the man himself, try Quine’s From Stimulus to Science.
Can’t think of a succinct critique specifically of Quinean naturalism offhand. John McDowell articulates one in his Mind and World, summarized here, but this text is not for the faint of heart. For a nice and relatively readable discussion of a couple of views of rationality, have a look at the chapter ‘Two Conceptions of Rationality’ in Hilary Putnam’s Reason, Truth and History.
Appreciate the reply. I think the point remains that what one means by ‘naturalism’ may have implications for what one can say about the nature of rationality, and this is something the denizens of this blog might care about.
One philosopher whose work it would be extremely interesting to see analyzed from a LW-style perspective is Max Stirner. Stirner has, in my opinion, been unfairly neglected in academic philosophy, and to the extent that his philosophy has been given attention, it was mostly in various nonsensical postmodernist and wannabe-avantgardist contexts. However, a straightforward reading of his original work is a very rewarding intellectual exercise, and I’d really like to see a serious no-nonsense discussion of his philosophy.
I assume you are familiar with this guy:
No, I’ve never heard of him, but thanks for the link. (For what that’s worth, I’m not a fan of Rand, and I’ve never read much from her. I am probably biased against her because of the behavior of her followers, but nevertheless, what little I’ve read from her writings seems rather incoherent.)
Deconstructions which point out that Hegel, Stirner, and Marx were disingenuous in their use of language and reasoning are useful.
I just read most of the comments in this thread, and despite a general agreement that LW philosophy and Quinean philosophy have a whole lot in common, no one even suggested reading up on his critics. The lens that sees its own flaws is fine and good (although always a more difficult task than one might expect), but what about the lens that exposes itself to the eyes of others and asks them what flaws they see?
And, especially with the replication crisis, shouldn’t we all be gaining some awareness of the philosophical assumptions that inform our experimental and cognitive neuroscience? The interpreters of an fMRI scan, or the designers or interpreters of an experimental study, interpret their findings through the lenses of certain philosophical assumptions about mind, cognition, and metaphysics. Heck, the language of discourse we use to talk about psychology is hugely influenced by a group of philosophers nearly everyone agrees was super wrong. The word ‘idea’ can be traced straight back to Plato and his Platonic forms, and our English-speaking naive psychology is, in large part, an internalization of Plato’s metaphysics, Newton’s physics, and Descartes’s dualisms. The words we use, and the ways it feels natural to combine them, carry assumptions from philosophy whether we’re aware of them or not. And those assumptions trickle up into the ways a psychologist interprets a subject’s actions, the ways the subject interprets and self-reports on their actions, and the ways a neuroscientist interprets an fMRI scan.
I find reading this post and the ensuing discussion quite interesting, because I studied academic philosophy (both analytic and continental) for about 12 years at university. Then I changed course, moved into programming and math, and developed a strong interest in thinking about AI safety.
I find this debate a bit strange. Academic philosophy has its problems, but it’s also a massive treasure trove of interesting ideas and rigorous arguments. I can understand the feeling of not wanting to get bogged down in the endless minutiae of academic philosophizing in order to be able to say anything interesting about AI. On the other hand, I don’t quite agree that we should just re-invent the wheel completely and then look to the literature to find the “philosophical nearest neighbor”. Imagine suggesting we do that with math. “Who cares about what all these mathematicians have written, just invent your own mathematical concepts from scratch and then look to find the nearest neighbor in the mathematical literature.” You could do that, but you’d be wasting a huge amount of time and energy re-discovering things that are already well understood in the appropriate field of study. I routinely find myself reading pseudo-philosophical debates among science/engineering types and thinking to myself: I wish they had read philosopher X on that topic, so that their thinking would be clearer.
It seems that here on LW many people have a definition of “rationalist” that amounts to endorsing a specific set of philosophical positions or meta-theories (e.g., naturalism, Bayesianism, logical empiricism, reductionism, etc). In contrast, I think that the study of philosophy shows another way of understanding what it is to be a rational inquirer. It involves a sensitivity to reason and argument, a willingness to question one’s cherished assumptions, a willingness to be generous with one’s intellectual interlocutors. In other words, being rational means following a set of tacit norms for inquiry and dialogue rather than holding a specific set of beliefs or theories.
In this sense, reason does not involve a commitment to any specific meta-theory. Plato’s theory of the forms, however implausible it seems to us today, is just as much an expression of rationalism in the philosophical sense. It was a good-faith effort to make sense of reality according to the best arguments and evidence of his day. For me, the greatest value of studying philosophy is that it teaches rational inquiry as a way of life. It shows us that all these different weird theories can be compatible with a shared commitment to reason as the path to truth.
Unfortunately, this shared commitment does break down in some places in the 19th and 20th centuries. With certain continental “philosophers” like Nietzsche, Derrida, and Foucault, the writing undermines the commitment to rational inquiry itself and ends up being a lot of posturing and rhetoric. However, even on the continental side there are some philosophers who are committed to rational inquiry (my favourite being Merleau-Ponty, who pioneered ideas of grounded intelligence that inspired certain approaches in RL research today).
I think it’s also worth noting that Nick Bostrom, who helped found the field of AI safety, is a straight-up Oxford-trained analytic philosopher. In my Master’s program, I attended a talk he gave on utilitarianism at Oxford back in 2007, before he was well known for AI-related stuff.
Another philosopher who I think should get more attention in the AI-alignment discussion is Harry Frankfurt. He wrote brilliantly on the value-alignment problem for humans (i.e., how do we ourselves align conflicting desires, values, interests, etc.).
I just wanted to thank you for your continuous work, and especially for explicitly sparing us the work of sifting through all that philosophical tradition.
Isn’t this missing our evolutionary prior, our instinct? And what argument can be made against privileged philosophical insight? That the map is not the territory is only partly true, as the map is part of the territory, and everyone can explore and alter their map and thereby alter the territory. That is what is happening when philosophers and mathematicians explore the abstract landscapes of mathematics and language. Our world of thought, our imagination and psyche, are very important parts of the territory. People try to fathom the constraints of their inner world by reducing their contemplations to be solely about extrasensory perceptions. We might be able to understand everything from the outside, but we’ll always only gain an algorithmic understanding that way. You might object that Mary won’t learn anything from experiencing the algorithm, but if Mary does indeed know everything about a given phenomenon, then by definition she also knows how the algorithm feels from the inside. Understanding something means assimilating a model of what is to be understood. Understanding something completely means being able to compute its algorithm; it means incorporating not just a model of something or the static description of an algorithm, it means becoming the algorithm entirely. And that is what philosophers are doing: they are not trying to dissolve human nature by formulating a static description, but to evoke the dynamic state sequence from the human machine by computing the algorithm.
And from what did you learn about the idea of evolution, and whence observe anything connected to the abstraction ‘instinct’?
What I meant by ‘evolutionary prior’ is all information available to us that is not a result of the stimulation of sensory receptors. This can be genetically programmed memory or the architecture of the computational substrate. It doesn’t matter if we were shaped by evolution; even a Boltzmann brain will contain extrasensory information. The basic point I tried to make is that philosophy can partly be seen as an art that tries to fathom the possibilities and constraints of our minds by experiencing, i.e. computing, the human algorithm. I am not trying to argue that the way Yudkowsky wants to fathom human nature is wrong; it is indeed the superior way of gaining functional knowledge. But the behavior of the human algorithm is sufficiently complicated that it is not possible to work out that behavior by any means other than performing the computation. In other words, the dynamic state sequence that can be evoked from the human machine by computing the algorithm is not merely complicated but complex, that is, unpredictable. Philosophers are computing the algorithm to learn more about its behavior. Philosophers also study the behavior of systems of human algorithms by computing the interaction with other philosophers. In doing so, philosophers are able to investigate the emergent phenomena of those systems. All those phenomena reduce entirely to the physical facts, but physical systems can have properties that their parts alone do not. Those properties cannot be predicted in advance, but only discovered by computing the system.
I suspect some philosophers of mind would reply ‘Philosophy of mind makes AI studies honest’. Also, if you are averse to recommending the reading of Quine, at least recommend some of his critics. If your views are Quinean, surely you should at least have a look at the vast anti-Quinean literature out there?
Interestingly, that sounds a lot like (an important part of) how linguistics research works. Of course, it’s a problem for philosophy because it doesn’t see itself as a cognitive science like linguistics does, and it endeavours to do other things with this approach than deducing the rules of the system that generates the intuitions.
Such as?
Such as, in this ancient example, understanding ‘the nature of justice’, as if that were some objective phenomenon.
I’m not up to date on philosophy since covering the drop-dead basics in high school seven years ago, so ignore this if modern philosophy has explicitly reduced itself to the cognitive science of understanding the mental machinery that underlies our intuitions. From what snippets I hear, though, I don’t get that impression.
I don’t know what example you are referring to, or what you mean by “some objective phenomenon”. Justice clearly isn’t something you can measure in the laboratory. It is not clearly subjective either, since people are either imprisoned or not; they can’t be imprisoned-for-me but free-for-you. Philosophical questions often fall into such a grey area. Socratic discussions assume that people intersubjectively have the same concept in mind, or are capable of converging on an improved definition intersubjectively. Neither assumption is unreasonable.
I’m quoting the essay.
Justice is subjective, after a fashion. It is a set of intuitions, systematised into commonly accepted laws, which can change over time. Whether people are in jail isn’t part of the phenomenon ‘justice’, only of how people act on these subjective ideas. On the other hand, people can be rightly-imprisoned-for-me and undeservedly-imprisoned-for-you if we disagree about the law.
An intersubjective fashion. It’s not one person’s preference. If justice were objective, Socrates should have tested it in the laboratory instead of discussing it. If it were subjective, he needn’t have invited his friends over—he didn’t need them to tell him who makes his favourite retsina. Justice can only be intersubjective, because it regulates interactions among people, and the appropriate way to decide intersubjective issues is to solicit a range of opinion from a number of people and iron out the bumps. I don’t see anything broken in what Socrates was doing. We still do it, in the form of ethics committees, think tanks and panel discussions.
I can’t see what distinction you are drawing. If there is a phenomenon of justice, it is an intersubjective way of combining preferences that fulfils certain criteria, such as being the same for all, in order to regulate certain concrete events, such as who lands in jail. So who lands in jail is in fact part of the intersubjective idea.
I don’t see why. If I think 2+2=5, then “2+2=5” isn’t true-for-me, it is just wrong. Disagreement is not a sufficient condition for something’s being properly subjective.
Someone asked me via email:
I figured my answer will be helpful for others, too, so I’ll post it here:
Its role in the sequences seems much simpler: if you look at human minds as devices for producing correct (winning) decisions (beliefs), the “map” aspect of the brain is effective to the extent that, and because, the state of the brain corresponds to the state of the territory. This is not the correspondence theory of truth; it’s a theory of (arranging) coincidence between correct decisions/beliefs (things defined in terms of the territory) and actual decisions/beliefs (made by the brain involving its “map” aspect), one that points out that it normally takes physical reasons to correlate the two.
I like how you’ve put this. This is roughly how I see things, and what I thought was intended by The Simple Truth, but recently someone pointed me to a post where Eliezer seems to endorse the correspondence theory instead of the thing you said (which I’m tempted to classify as a pragmatist theory of truth, but it doesn’t matter).
My point is that the role of the map/territory distinction is not specifically to illustrate the correspondence theory of truth. I don’t see how the linked post disagrees with what I said, as its subject matter is truth (among other things), and I didn’t talk about truth; instead I said some apparently true things about the process of forming beliefs and decisions, as seen “from the outside”. If we then mark the beliefs that correspond to the territory, those fulfilling their epistemic role, as “true”, the correspondence theory of truth naturally follows.
Now that I’ve actually read some Quinean naturalism (and similar stuff), I’m gonna downvote. Better than the surrounding papers, okay. Probably important steps forward. But not all that good, in the light of decades’ worth of higher expectations. The only associated things I would exempt (that I’ve read) are some of Peirce’s writings.
Socrates definitely drew on some questionable intuitions in Plato’s dialogues, but I think justice is a particularly slippery concept, in that it requires a prior conception of both the law and the good.
Legality is, for any sufficiently well-written laws, a purely factual matter. Does x break the law? Yes/no. Morality is, for any particular sufficiently well-written moral system, also a factual matter, but with more degrees of freedom. Is x good? Is x optimally good? Is x bad, but the best available option? Is the moral law, as written, itself good or optimally good under its own definition of goodness?
Justice flings this all together in one pot: What ought the law to be? How ought the law to be enforced? How should we feel about the execution of justice?
This may be relatively clear to most readers of this site who grew up aware that good/evil and law/chaos are largely orthogonal, but in my experience many (most?) people have significant confusion/crossover between “illegal” and “immoral.”