How not to be a Naïve Computationalist
Meta-Proposal of which this entry is a subset:
The Shortcut Reading Series is a series of Less Wrong posts identifying the minimal readings, as opposed to the standard curriculum, that one needs in order to grasp most of the state-of-the-art human conceptions of a particular topic. Time is finite and there is only so much one person can read, so we need to find the geodesic path to epistemic enlightenment and show it to Less Wrong readers.
Exemplar:
“How not to be a Naïve Computationalist”, the Shortcut Reading Series post in philosophy of mind and language:
This post’s raison d’être is to serve as a guide to the minimal amount of philosophy of language and mind needed by someone who ends up thinking the world and the mind are computable (such as Tegmark, Yudkowsky, Hofstadter, Dennett, and many of yourselves). The desired ability, which they have achieved and you soon will, is to be able to state reasons, debug opponents, and understand different paradigms, as opposed to just thinking that it’s 0s and 1s all the way down and not being able to say why.
This post is not about Continental/Historical Philosophy; for that, there are recommendations at http://lesswrong.com/lw/3gu/the_best_textbooks_on_every_subject/
The order is designed.
What is sine qua non, absolutely necessary, is in bold, and OR means you only have to read one of the options (the second being more awesome and complex).
Language and Mind:
37 Ways words can be Wrong—Yudkowsky
Darwin’s Dangerous Idea, Chapters 3, 5, 11, 12, and 14 - Daniel Dennett
On Denoting—Bertrand Russell
On What There Is—Quine
Two Dogmas of Empiricism—Quine
Naming and Necessity—Kripke OR Two-Dimensional Semantics—David Chalmers
“Is Personal Identity What Matters?”—Derek Parfit
Breakdown of Will, Part Two (don’t read Part Three)—George Ainslie
Concepts of Consciousness 2003 - Ned Block
Attitudes De Dicto and De Se—David Lewis—Philosophical Papers, Volume 1
General Semantics—David Lewis—Philosophical Papers, Volume 1
The Stuff of Thought, Chapter 3, “Fifty Thousand Innate Concepts”—Steven Pinker
Beyond Belief—Daniel Dennett, in The Intentional Stance
The Content and Epistemology of Phenomenal Belief—David Chalmers
Quining Qualia OR I Am a Strange Loop OR Consciousness Explained—Dan & Doug
Intentionality—Pierre Jacob—Stanford Encyclopedia of Philosophy
Philosophy in the Flesh—Lakoff & Johnson—Chapters 3, 4, 12, 21, 24, and 25
What you cannot find here you probably will find on Google or Library.nu. (If anyone has a link to Beyond Belief (EDIT: found it!), post it; it is the only hard-to-find one.)
Congratulations, you are now officially free from the Naïve philosophical computationalism that underlies part of the Less Wrong Community. Your computationalism is now wise and well informed.
Feel free now to delve into some interesting computational proposals such as
Consciousness as Integrated Information—Giulio Tononi
What is Thought—Eric Baum
Good and Real—Gary Drescher
The Mathematical Universe Hypothesis—Max Tegmark
Dealing with complexity is an inefficient and unnecessary waste of time, attention and mental energy. There is never any justification for things being complex when they could be simple. - Edward de Bono
There are many realms and domains in which the quote above should not be praised. But I think I have all philosophy majors with me when I say that there must be a simpler way to reach the level of knowledge we attain upon graduation.
Finally, having wasted substantial amounts of time reading those parts of philosophy that should not be read, and not intending to make the same mistake in other areas, I ask you to publish a selection of readings in your own area of expertise. The Sequences are a major rationality shortcut, and we need more of that kind.
I like the irreverent attitude exemplified by this posting. But the posting might have been improved if it had attempted to provide a short characterization of what a Naïve Computationalist believes that a wise and well-informed Computationalist does not. And vice versa.
Wow! Thanks for doing this regarding Computationalism. I don’t really have an area of expertise such that I could produce a list like yours, but I can think of some areas where such a list would be very helpful (to me, at least).
How not to be a Naive Consequentialist: The ethical thinking here is a bit … hmmm, let’s say … parochial, because it has never confronted the best thinking of other schools of ethics (i.e. deontological ethics, virtue ethics, and contractarian ethics). Neither has it really addressed foundational issues within consequentialism itself, issues addressed by people like Sen and Harsanyi. It would be best if we could discuss our own ethical viewpoints in terms that other people can understand.
How not to be a Naive Evolutionist: Apart from some overenthusiasm for the just-so-stories of the less reputable parts of evolutionary psychology, LessWrongers seem to have a fairly good grasp of the philosophical implications of Darwinian evolution. But I have noticed some lack of awareness of some of the recent political/intellectual history of the field, plus a bit of the usual difficulty that outsiders have in separating the headline-grabbing pop science from the real science.
How not to be a Naïve Realist/Reductionist: This one is probably controversial, but what I have in mind here is not to overthrow realism and reductionism, but rather to provide some exposure to the saner criticisms of these philosophical doctrines. What naturalism meant before Quine. What emergence means to Philip Anderson. What is meant by scientific anti-realism and why it isn’t a totally insane viewpoint.
How not to be naive about logic, models, and proof theory, particularly as they relate to proving program correctness and program equivalence. The importance of these topics to FAI is obvious. Yet a basic knowledge of the techniques and terminology of this field is sorely lacking in many of us. It is not rocket science.
We could probably also use such reading lists in the fields of machine learning, game theory, and Bayesian statistics. Perhaps also GOFAI. And at least one reading list on practical rationality.
I like and agree with the implication that reading these is needed to defend computationalism, but not needed to believe for correct reasons that computationalism is true.
What is naive computationalism? Would you classify Giles’s questionnaire or my reply to dfranke as naive? If the answer is “yes” for either of them, could you point out where it goes wrong?
To most people, it does not mean that. I suggest you use OR where you now use XOR.
Use the “<==”. Since the second implies the first.
I am just a naïve computationalist, though.
This is a good example of how computationalist thought may get lost sometimes, even though it was a joke. It is explanatory: the second does not imply the first. It is more awesome and will give you more information, but not necessarily the same information. It will only be the same *for the purposes of not becoming a Naïve Computationalist*. The implication could get lost in context (not that the XOR couldn’t).
Speaking of thoughts getting lost …
You are using the word XOR incorrectly. It has an accepted meaning—it is not a word that is available for you to attach a private definition to. The actual meaning of a recommendation to “do A XOR B” is “do A or do B but don’t do both because whichever one you do second will undo the good effect of whichever one you did first”. If the meaning you wish to convey is “do A or do B or do both (though both is not necessary)” then you should use the word OR. At least in English.
Please correct this. For some reason, it offends me far more than would a picture of Mohammed.
To expand on that point, I should also point out that, more generally, “do A_1 XOR A_2 … XOR A_n” means not “do precisely one of A_1 through A_n”, but rather “do an odd number of A_1 through A_n”.
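The parity reading of chained XOR can be checked mechanically. A minimal sketch in Python (the `chained_xor` helper is just an illustration, not something from the thread):

```python
from functools import reduce
from operator import xor

def chained_xor(*values):
    """Fold XOR over boolean operands: A_1 XOR A_2 XOR ... XOR A_n."""
    return reduce(xor, values, False)

# Exactly one operand true -> True, as the "precisely one" reading expects.
assert chained_xor(True, False, False) is True

# But all three true (an odd number) is *also* True,
# so chained XOR computes parity, not "precisely one of".
assert chained_xor(True, True, True) is True

# An even number of true operands -> False.
assert chained_xor(True, True, False) is False
```

With two operands the two readings coincide, which is why the confusion only surfaces once the recommendation covers three or more alternatives.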
OK, then I need to know which established symbol means “do precisely one of A_1 through A_n”.
“Do precisely one of A_1 through A_n”. There’s nothing wrong with writing things out longhand.
(Except, as Perplexed points out, I don’t think that’s really what you mean—would it really be such a problem to do more than one?)
If the purpose is to be minimal, yes.
http://en.wikipedia.org/wiki/Exclusive_or
“one or the other but not both.” From Wikipedia.
I begin to think I was not that wrong...
Your use may be technically correct but it is very misleading. If you simply say “do A or B”, it’s clear that doing one is sufficient so a person who wants to save effort will only do one. Specifying “xor” therefore suggests that there is some additional harm to doing both, beyond nonminimality.
Do A ∈ {A1, A2, … An}?
Although in this case, I don’t think there’s any harm to come from doing more than one of A1 through An; wouldn’t “at least one” work better?
I got that usage of ‘XOR’ from one of Pinker’s books, I believe. But given my utilitarianism, I’m postponing my knowledge so that those who suffer Mohammed-level pain stop experiencing it, and using simple ‘OR’.
Thanks for the (potentially) very useful post. Upvoted with pleasure, as the best thing in a while fitting the criterion: “I’d like to see more posts like this on LessWrong.”
Certainly not all, but I’m with you.
Be cheered that you’ve been through the worst of it—few other fields have so very many “parts that should not be read” yet nonetheless have so many parts that should be read.
My eyes hurt! What on earth happened to the formatting?
Microsoft Word, probably.
Not specifically. Microsoft Word can actually be a way to ensure correct formatting: you can make sure the on-site editor works as you intend by writing your article in Word first and then copying it over. I think the formatting deviation here is due to other errors (even if Microsoft Word was also used).
I started with “37 Ways words can be Wrong” and didn’t find much immediate benefit from the exercise in the way of being less naive about Computationalism.
I scroll down your list and see there’s a lot about language. Is such an extensive education in language quite necessary? (Or do I need to keep reading to see?)
Most of those readings will tell you nothing about computationalism directly; they will broaden your vision of the world in such a way as to eventually make your reasoning converge toward being a better rationalist about issues related to computationalism.
The main reason I put the personal-identity text there, for instance, is to cause a transition from frequently thinking that something (like personal identity) will carry over to its closest continuator in a new, slightly different scenario, to a more gradualist way of thinking, in which sometimes things may dissolve along any dimension you try to vary them. In a future in which some folks try to build FAI, this will be of extreme importance when considering the values dimension. For instance, will what we want to protect be preserved if we extrapolate human intelligence? This is my current line of work (any input welcome).
Does this mean you’re thinking about uploaded people here? I think that is an important research question.
I was thinking about CEV, but yes, the same question applies to uploads (and is not the classic upload issue).
Good that you find it important. I’m going to dedicate some time to that research.
Does anyone have good reasons to say it is not a good research avenue?
What we value as good and fun may increase in volume, because we can discover new spaces with increasing intelligence. Will what we want to protect be preserved if we extrapolate human intelligence? Yes, if this new intelligence is not some kind of mind-blind autistic savant 2.0 who clearly can’t preserve high levels of empathy and share the same “computational space”. If we are going to live as separate individuals, then cooperation demands some fine-tuned empathic algorithms, so we can share our values with others and respect the qualitative space of others. For example, I may not enjoy dancing or having a homosexual relationship (I’m not a homophobe), but I’m able to extrapolate it from my own values and be motivated to respect its preservation as if it were mine. (How? By simulating it. As a highly empathic person, I can say that it hurts to make others miserable, so it works as an intrinsic motivation and goal.)
Thank you for this post. Would you please make the font easier to read?
I don’t think it’s known why some articles come out with weird fonts and sizes. I looked in the editor, and there are a lot of HTML formatting tags before every individual paragraph in this post, but no obvious way to affect them one way or the other in the WYSIWYG. Manually deleting all the formatting tags would involve removing so much text that I’d be afraid to carve away actual content, so I’m leaving it for the original author, who will be more likely to notice something missing.
It probably comes from cut and pasting from an external rich text editor.
It was complicated, but fixed.
That happens to me sometimes: if I write a post in Word and then copy-paste, sometimes the last paragraph comes out in a different font than the rest, or the whole of it is in a weird font. I think most of this site is in Arial or something similar, but I usually write in Times, so that might have something to do with it.
When I run into this situation, which is fairly often, I use a text editor in between. I paste the text to the editor, then copy it from the editor. This removes formatting, fonts, etc. It has to be a real text editor, not something that allows formatting.
Specific software such as Word may have the ability to copy text only.
I wrote some regexp scripts and removed the extra tags. Send me a message if something like this happens in the future and I don’t notice.
Aye aye.
deleted
Question, hopefully one betraying my busyness, not laziness :D ... Can you watch the BBC production of Darwin’s Dangerous Idea instead of reading it? And if so, which sections correspond?
Thanks loads.