Rationality Quotes: February 2010
A monthly thread for posting rationality-related quotes you’ve seen recently (or had stored in your quotesfile for ages).
Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
Do not quote yourself.
Do not quote comments/posts on LW/OB.
No more than 5 quotes per person per monthly thread, please.
ETA: It would seem that rationality quotes are no longer desired. After several days this thread stands voted into the negatives. Would whoever chose to downvote this below 0 care to express their disapproval of the regular quotes tradition more explicitly? Or perhaps they may like to browse around for some alternative posts that they could downvote instead of this one? Or, since we’re in the business of quotation, they could “come on if they think they’re hard enough!”
From a BBC interview with a retiring Oxford Don:
Don: “Up until the age of 25, I believed that ‘invective’ was a synonym for ‘urine’.”
BBC: “Why ever would you have thought that?”
Don: “During my childhood, I read many of the Edgar Rice Burroughs ‘Tarzan’ stories, and in those books, whenever a lion wandered into a clearing, the monkeys would leap into the trees and ‘cast streams of invective upon the lion’s head.’”
BBC: “But, surely sir, you now know the meaning of the word.”
Don: “Yes, but I do wonder under what other misapprehensions I continue to labour.”
On utility:
--bash.org
also from bash.org (made as a reply since I’m already at my 5-quote limit):
The analysis fails to take into account the cost of buying and raising cats.
Or at least of maintaining friendships with people who have cats.
While hilarious, and I upvoted it, I doubt economists would agree with the stated cost of the catpenny game, nor with its comparability to other forms of entertainment.
ETA: and catpenny seems likely to be subject to drastically diminishing returns.
I seriously can’t decide if catpennies have diminishing marginal utility or not!
We should test this! Anyone got a cat? I’ve got 9 pennies I don’t want.
Don’t forget to consider the negative utility of an angry cat attacking the catpenny player, which will surely happen after x catpennies.
Anyone going to go looking for x? It would of course have to be statistical distribution, varying with cat age, breed, and so on.
Also, how hard you’ve managed to hit it with the pennies. I think you have to try to maximise the damage:irateness ratio.
Doesn’t catpenny cost less than a penny (in terms of dollars spent)? You can recover most, if not all, of the pennies.
also, don’t forget to consider that the cat is conscious and might not like getting hit by pennies :)
Given yesterday’s xkcd, I note that Google has no hits for “strip catpennies.”
Huh; I know someone who made this same suggestion, only he was talking about throwing the pennies at people… I suppose it’s worth noting that in this case, the pennies are not as recoverable.
-- Raving Atheist, found via the Black Belt Bayesian blog (props to Steven)
maybe ‘tolerance’ simply means: “the cost of settling our differences outweighs the benefits”
That makes sense, but knowing in advance which outweighs which is problematic.
Which suggests rationality may not be as purely instrumental as we would like to think. It can only practically happen between people who already have generally low preferences over beliefs, those who want truth for its own sake.
“Intuition only works in situations where neurology and evolution has pre-equipped us with a good set of basic-level categories. That works for dealing with other humans, and for throwing things, and for a bunch of other things that do not, unfortunately, include constructing viable philosophies.”
-- Eric S. Raymond
-- Bryan Caplan
Great quote, though it took me a minute to parse. I think it’s the dashes that did it. Wouldn’t this read a lot better with commas instead?
If you can’t feel secure (and teach your children to feel secure) in nightmare scenarios with 1-in-610,000 odds, the problem isn’t the world. It’s you.
It works better with longer dashes—I always get thrown off when someone uses a single hyphen instead of faking an en dash with two hyphens surrounded by spaces.
Should be an em-dash, really. You can get em-dashes — on a mac, at least — by typing option–shift–minus-sign.
Some people prefer en-dashes – option-hyphen, alt-0150 – when you’re surrounding them with spaces, only using em-dashes without the spaces, but I don’t think it’s important. Hyphens are more Lynx-friendly, so I often use those.
Steven Pinker—The Blank Slate: The Modern Denial of Human Nature
I love this quote, and I plan to get around to reading this book soon, but I figured I should post this article which seems to say that we do have an innate instinct for numbers, addition, and subtraction, even if we may not completely realize it right away.
“You don’t use science to show that you’re right, you use science to become right.”—Randall Munroe, in the alt-text of xkcd 701
“In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”
-- G.K. Chesterton
A “friend” of mine was a fan of using this to argue for Christianity. The idea of never changing one’s mind doesn’t seem very rational.
Your friend must be pretty hungry by now.
“Who are you?”
“Who am I? I’m not quite sure.”
“I admire an open mind. My own is closed upon the conviction that I am Shardovan, the librarian of Castrovalva.”
-- Doctor Who
To be fair, G.K. Chesterton was probably also using this to argue for Christianity.
-- Donald Rumsfeld, Feb. 12, 2002, Department of Defense news briefing
-- Slavoj Žižek @ Google
I saw the Rumsfeld quote, immediately thought of that Zizek line and then instantly concluded no one at Less Wrong would like to hear from Slavoj Zizek. This must be the first time a continental philosopher has received upvotes here. I’m fascinated.
He noticed the blatantly missing corner in the field of possibilities and replied to it intelligibly. I have no idea what a continental philosopher is, much less who Zizek is, but the quote is appropriate.
Did no one check out the video?
I didn’t—watching just now, as suggested by your comment, I bailed at the German type of toilet.
Alfred Mander—Logic for the Millions
-- Peter’s Evil Overlord List on how to be a less wrong fictional villain
Hear, hear! :D
Yeah, let me do it.
Sentient?
Fair ’nuff.
On parsimony:
--John von Neumann, at the first national meeting of the Association for Computing Machinery
—H.P. Lovecraft, clearly talking about cryonic preservation
Yeats, on what he’ll do when no longer “fastened to a dying animal”.
Definitely cryonics. I never really understood why this phrasing applies to Cthulhu, although I haven’t read very much of Lovecraft.
The Elder Gods and other nameless menaces are portrayed as unphysical quasi-extra-dimensional beings from elsewhere; as such, death does not apply to them. Astronomical/universal conditions merely allow or disallow their projects.
But what are strange aeons? Why will Death die?
Reading Lovecraft: You’re doing it wrong.
Strange aeons are aeons that are many and long; HPL thinks in a steady-state cosmos where the universe is indefinitely old. “Death will die” in the Christian phrasing: the non-human menaces grow more powerful over time, and their ‘sleep’ periods will shrink.
“If the tool you have is a hammer, make the problem look like a nail.”
Steven W. Smith, The Scientist and Engineer’s Guide to Digital Signal Processing
--Bryan Caplan
Reference: Guardians of Ayn Rand
-- The Oxford Handbook of Clinical Medicine
Bruce Schneier
Presumably not per unit exposure, which is the relevant measure when you’re near a pig or shark. If he’s talking about abstract worry, then he might have a point.
I’ve decided to spend today abstractly worrying about sharks.
Fake Jedi sharks, no doubt.
Is today silly comment day?
But what’s the unit exposure? Does the exposure related to ocean swimming match the exposure of camping in Michigan wilderness? You have a point, though. Of course, most people should worry about neither pig nor shark attacks.
Ok, but most people who are more worried about sharks than pigs are going on vacation to the beach and don’t work on a swine farm. And I don’t think those people are wrong to worry about sharks more than pigs. It is also quite likely that swine farmers do worry about pigs more than the rest of us.
I googled for it but didn’t find any evidence for pigs killing people.
Googled it too. You need to expand “pigs” to include “wild boar”.
Still, this “six times as many deaths from pigs as from sharks” sounds suspiciously like an urban legend; the precise multiplier implies that there should be a well-known source, and not finding it is a hint. The numbers are small enough that the ratio should be all over the map.
Average Number of Deaths per Year in the U.S
Bee/Wasp 53
Dogs 31
Spider 6.5
Rattlesnake 5.5
Mountain lion 1
Shark 1
Alligator 0.3
Bear 0.5
Scorpion 0.5
Centipede 0.5
Elephant 0.25
Wolf 0.1
Horse 20
Bull 3
Here
Not entirely sure of the accuracy of these, but still. I think 31x as many killed by dogs as by sharks is a much more important figure than deaths from pigs.
Looks like a slight mangling of the data from http://www.wemjournal.org/wmsonline/?request=get-document&issn=1080-6032&volume=016&issue=02&page=0067#i1080-6032-016-02-0067-t02
I find 3 pig-related occupational fatalities in the US from 1992-1997, and total US deaths at 4 from all marine animals, 2 of which were venomous, from 1991 to 2001. So it looks like pigs have it, though it’s not like the difference is statistically significant.
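For what it’s worth, the intuition that counts this small can’t distinguish the two rates is easy to check with an exact binomial test. This is only a rough sketch (the observation periods differ, and the 3-vs-4 figures are just the ones quoted above):

```python
from math import comb

# Exact two-sided binomial test: given 3 pig-related deaths and
# 4 marine-animal deaths (7 total), how surprising is this split
# if both causes were equally deadly (p = 0.5)?
pigs, marine = 3, 4
n = pigs + marine
observed = comb(n, min(pigs, marine))
# sum the probability of every outcome at least as extreme
p_value = sum(comb(n, k) for k in range(n + 1)
              if comb(n, k) <= observed) / 2 ** n
print(p_value)  # 1.0 -- no evidence of any difference at all
```

A 3/7 split is the least extreme outcome possible with 7 events, so the p-value is as large as it can get.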
I heard recently that when The Wizard of Oz came out, audiences would have realized how dangerous it was when Dorothy fell into the pig pen. Today, we watch that scene and think it’s just about her losing her balance, and maybe wonder why the farmhand who saved her was so visibly upset about it. (I contacted my source and he said it was ‘just common knowledge’, and that pigs have since been domesticated from the wild boars they were, and that I should google “pigs aggression”.)
Since when is fear only about risk of death?
I suspect similar odds hold for non-fatal injuries.
Is there some fear associated with sharks other than the danger of injury or death?
The terrifying soundtrack that accompanies them when they approach.
-- Mark Twain, excerpt from The War Prayer
And there was me thinking that The Shamen had written all that. Thanks.
It seems to me this would have been a wonderful prayer to pray before going into battle against an evil enemy like the Germans or Japanese in WW 2.
Penn Jillette
Note to self: every day, eight million things happen in New York.
I’m guessing the number comes from the population of New York city: about 8 million.
Wow, New York must be a pretty boring place to live in.
Events with million-to-one odds of happening in one day to one person happen eight times a day in New York—on average.
Hm. And I thought I was being original when I liked to say ‘billion to one odds happen 7 times a day on Earth’.
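Both versions are the same linearity-of-expectation estimate. A toy sketch, using the round population figures from the comments above and assuming each person gets one independent trial per day:

```python
# Expected count of "million-to-one" daily events in New York,
# modeling each of N residents as one independent trial per day.
nyc_population = 8_000_000
p_event = 1 / 1_000_000
print(nyc_population * p_event)  # ~8.0 events per day, on average

# Same estimate for "billion-to-one" events across Earth
# (using a round 7 billion, as in the comment above).
earth_population = 7_000_000_000
print(earth_population / 1_000_000_000)  # 7.0 events per day
```

Expectation is additive even when the events are not independent, which is what makes this estimate so robust.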
--- James Stephens
-- Isaac Asimov via Salvor Hardin, Foundation
-- Edsger W. Dijkstra
“We can get very confused, because we think that words must have some secret meaning that we have to figure out. They don’t. They are just noises or marks, and they mean whatever experience you have learned to mean by them. People tend to use similar words in similar situations, but unless you have specifically agreed on what the words will mean, in terms of underlying experiences, there’s no way to know what another person understands when you use them. The experience you attach to a word when you say it isn’t automatically the same as the experience another person attaches to the same word when hearing it.”
William T. Powers
I find this (the unspoken and un-agreed-upon array of connotations behind a word) is a major source of disagreement even on this site.
--Jonathan Coulton
Perfecting my warrior robot race,
Building them one laser gun at a time.
I will do my best to teach them
About life and what it’s worth,
I just hope that I can keep them
From destroying the Earth!
--SIAI
Interesting vid here.
Johannes Kepler
-- Benjamin Franklin
-- Garfield
The original cartoon.
“Dad, the reason I like to shop and buy things is to get rid of my money”
8 year old Cayley Landsburg, quoted in Fair Play
edit. Link added to disambiguate citation...
“Fair Play” is somewhat ambiguous a citation...
“Cayley Landsburg Fair Play” is enough, though.
Why is this interesting? Money isn’t inherently useful. Why say “than money” when “It’s amazing the things people like” will serve?
-- Han Solo
http://home.netcom.com/~rogermw2/force_skeptics.html
This page persuaded me, by the way—I am now a Force Skeptic with respect to the Star Wars universe.
That page sounded like banal propaganda. Yes, any magic is indistinguishable from sufficiently advanced technology but it sounds to me like the author has a strong preference to blaming evidence on an invisible robotic dragon in his garage rather than uncover the actual explanation whatever it may be.
This is a world where you can hear sound in space and the speed of light is more of a guideline than an actual rule. Your real-world preconceptions just don’t apply. Once there is any evidence whatsoever that Jedi are unwilling to subject the Force to any scientific scrutiny, such skepticism begins to gain credibility. As things stand, however, I would expect the Jedi to be willing participants in Force research. I would, naturally, engage in such research myself, partly out of a desire to understand the laws of the universe, but mostly because I intend to harness the Force to my own ends.
Rather, it sounds exactly like a humorous, ironic fan-written piece, with no intention to truthfully explain in-universe things...
… that should leave us all being highly amused Force Believers with respect to the Star Wars universe.
Sure, if you believe everything you see in the movies, but that seems like obvious Rebel propaganda to me.
Even worse, some senior imperial officers at the time of Yavin IV believed it!
From an EU perspective, that page is quite wrong, especially with assertions like
(EU introduced Force-detecting devices left over from the Jedi purges.)
or
(I think Lucas himself wrote in a blind Jedi or two.)
EU?
Expanded universe.
Yet whenever I see that, I think “European Union”. And when I first saw Star Wars fans talk about the OT, my first thought was “Old Testament”. Actually, that’s not far off, in a sense! (It’s actually “Original Trilogy”.)
ETA: A “Jew” of Star Wars would, I guess, be someone who accepts the OT, but rejects everything thereafter. There seem to be many...
Expected utility. It’s more powerful than the Force.
My initial thought was ‘Eliezer Yudkowsky’ until I realized that that would be EY and not EU… The way I assume your name is pronounced made that mistake possible.
Expanded Universe. All of the books, comics, etc outside of the movies.
Expanded Universe, probably.
-- PZ Myers
I wonder if people who upvoted this did so sincerely or as a “look how irrational elite scientists are” quote.
-- Nicolas Boileau
Rough translation: “What is well understood can be told clearly, and words to express it should come easily.”
ETA: it is worth pondering the converse; just because something rolls off the tongue doesn’t mean it’s well understood. It could be that it’s only well-rehearsed.
What the quote is aimed at is work of a supposedly high intellectual caliber, which just so happens to be couched in impenetrable jargon. Far more often, that is in fact evidence of muddled thought, not that the material is “beyond me”.
It’s obvious, but I must point out that giving the quote in the original French and providing a “rough translation” seems at odds with the message of the quote.
Why? I’m not an expert French->English translator, and I only invested a few minutes in the translation, so calling it “rough” seems appropriate. And saying something clearly in more than one language is more difficult than saying the same thing clearly in one language.
That a perfect, instant translation of a well-crafted quote by a talented French Enlightenment philosopher doesn’t just roll off my fingertips in English shouldn’t compromise the message.
Weird. I thought you’d posted it this way to be ironic. Anyway...
It compromises the message for precisely that reason. If you agree with the quote, then if you understand what it means, then it should be easy to express it clearly.
Which are you claiming: a) that I don’t understand the quote, or b) that my rough translation is unclear?
Are you perhaps supposing that “rough” and “clear” are antonyms?
I think the translation is clear enough; what makes it “rough” is that a perfect translation would feel like it was a literal translation, all the while keeping the exact nuance of the original. If you will, it is the fact of its being a translation which makes it rough.
For more on the subtleties of translation, I’ll direct you to Hofstadter’s excellent Le Ton Beau de Marot.
-Benoit Mandelbrot
While I agree, where could the earth be getting its strength from?
Also: if mathematics in contact only with mathematics becomes “less mathematical” than mathematics in contact with praxis, then how can praxis in contact with mathematics become more practical than praxis out of contact with mathematics?
If you have no mathematical techniques, you don’t know how to think about your empirical evidence.
If you have no empirical evidence, you have nothing to use your mathematical techniques on.
You need both.
Circular reasoning. One chunk pushes against the next, which pushes against the next....until you’re back where you started.
“People are not complicated. People are really very simple. What makes them appear complicated is our continual insistence on interpreting their behavior instead of discovering their goals.”
-- Bruce Gregory
James Q. Wilson, Moral Intuitions
Freeman’s case is not so clear-cut. From Skeptic Magazine:
The Trashing of Margaret Mead: How Derek Freeman Fooled Us All on an Alleged Hoax
That’s odd. The Wilson quote in aausch’s post heavily implies that Freeman spoke Samoan and Mead didn’t. But Paul Shankman’s Skeptic article says
Hmm. Wonder who’s right.
In context, the closing paragraphs of the article are also relevant:
http://lesswrong.com/lw/20y/rationality_quotes_april_2010/1v7e
I’d like to buy each of those ladies a beer.
-- http://www.ribbonfarm.com/2009/10/07/the-gervais-principle-or-the-office-according-to-the-office/
I’m interested to find that you read ribbonfarm.com, since along with lesswrong it’s one of my two most-visited blogs.
I sometimes think Venkatesh’s way of thinking might be on a level above that of many of the posts here. As an engineer he seems to have internalized the scientific/rationalist way of thinking, but he’s combined that with a metaphorical/narrative/artistic way of looking at the world. When it works well, it works really well. What do other people think?
Interestingly, he has a PhD in an AI-related field (specifically, control theory), but thinks the Singularity is unlikely to happen: http://www.ribbonfarm.com/2010/01/28/the-misanthropes-guide-to-the-end-of-the-world/
Another article that might contradict a common belief of this community: http://www.ribbonfarm.com/2010/09/28/learning-from-one-data-point/
Anyway, certainly a blog I’d recommend to lesswrongers.
Erm, sorry, I was just linked there or Googled there or something, don’t read it on a regular basis.
Among my favorites as well! Venkat and Eliezer’s recommendations currently dominate my reading queue, and I’d be hard-pressed to pick which of their books I’m more eagerly anticipating.
Venkat’s observations about group decision making and organizational dynamics are a big part of what made me write this proposal (which I’ve procrastinated following up on due to being uncertain how to proceed).
There’s definitely some interesting contrast between Venkat and Eliezer’s views/styles/goals. A Blogging Heads episode could be fascinating!
‘Nash equilibrium strategy’ is not necessarily synonymous to ‘optimal play’. A Nash equilibrium can define an optimum, but only as a defensive strategy against stiff competition. More specifically: Nash equilibria are hardly ever maximally exploitive. A Nash equilibrium strategy guards against any possible competition including the fiercest, and thereby tends to fail taking advantage of sub-optimum strategies followed by competitors. Achieving maximally exploitive play generally requires deviating from the Nash strategy, and allowing for defensive leaks in ones own strategy. -- Johannes Koelman
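Rock-paper-scissors makes the point concrete: the Nash strategy (uniform random) guarantees an expected payoff of zero against any opponent, but it also never earns more than zero, so exploiting a weak opponent requires deviating from it. A small sketch, where the 60/20/20 biased opponent is made up for illustration:

```python
# Sketch: Nash play is safe but non-exploitive in rock-paper-scissors.
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(mine, theirs):
    # +1 for a win, -1 for a loss, 0 for a tie
    if mine == theirs:
        return 0
    return 1 if BEATS[mine] == theirs else -1

def expected_payoff(my_mix, their_mix):
    # expected value of a mixed strategy against a mixed strategy
    return sum(p * q * payoff(m, t)
               for m, p in my_mix.items()
               for t, q in their_mix.items())

biased = {"rock": 0.6, "paper": 0.2, "scissors": 0.2}   # exploitable opponent
nash = {m: 1 / 3 for m in MOVES}                        # Nash equilibrium
exploit = {"rock": 0.0, "paper": 1.0, "scissors": 0.0}  # always play paper

print(expected_payoff(nash, biased))     # ~0.0: safe, but earns nothing
print(expected_payoff(exploit, biased))  # ~0.4: exploitive, but itself exploitable
```

Note that the exploitive strategy would lose badly to a scissors-heavy opponent, which is exactly the “defensive leak” the quote describes.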
-- Jsomers.net, How to be a loser (Relevance)
“Seeing is believing, but seeing isn’t knowing.”—AronRa
-- H. Ross Perot
To take advantage of professional specialization, gains from trade, capital infrastructure, comparative advantage, and economies of scale, the way grownups do it when they actually care, I’d say that the activist is the one who pays someone else to clean up the river.
If people don’t realise that the river is dirty and that’s causing problems, changing that is valuable work by itself.
The more I read this quote the more I hate it. It is an anti-rationality quote. It says, if you are not rich enough to run as an independent Presidential candidate, if you’re not in a position to make a difference by yourself, if all the power you have is your voice, then shut up; leave action to the rich and powerful, without criticism. That your voice has power is part of the point of democracy, and it’s not hard to see why a man like Perot might prefer to make that sound less legitimate.
I doubt that was the intended meaning. He’s just encouraging you to do something. Doesn’t have to be big.
No, in the first sentence he’s explicitly denigrating those who speak up.
..for being all talk.
I can see how you might have come to your conclusion, but saying it’s “explicit” is just not true.
That doesn’t sound like an activist. That sounds like “sucker doing other people’s work for free”, which doesn’t sound like an effective plan for bringing about positive change—those people tend to “weed themselves out” over the long run.
I’m not saying you shouldn’t do things to make the world a better place, like: not litter, drive courteously, etc. (Although you should be careful about which things actually accomplish a net good.) “Be the change you want in the world” (attr. Gandhi) is a good motto to keep. I’m just saying that you shouldn’t expect major problems to get solved by Someone Else at no cost to you, nor complain about someone pointing out the dirty river instead of immediately cleaning it up.
Personally, I’m very good at discovering what’s wrong with a process or situation. I can detect flaws easily and accurately. What I’ve found I need is someone who, after I’ve done my analysis, will look me in the eye and say, “OK. So how do we fix it?”
Without that simple question, I find that far too often I stop at the identification step, shaking my head at the deplorable state of affairs.
The question analogous to the Perot quote would be “So why don’t you fix it?”.
So for example, it would make sense for me to try and personally swoop in and free Chinese political prisoners, but if I’m not prepared to do that, I shouldn’t protest their incarceration.
I don’t think this rule leads to the right kind of behaviour.
It doesn’t, and it annoys me. That makes me quite ambivalent about the quote.
How is this comment responsive to my point or supportive of the original post?
Does this work better for you?:
“The rationalist is not the man who complains about biases. The rationalist is the man who works to understand his biases.”
(coin-flipped for male)
-- Groucho Marx
It’s bad luck to be superstitious.
Marvin Minsky—The Society of Mind
-- Ben Shahn, “The Shape of Content”
--Erwin Schrödinger, Mind and Matter
-- News of the Weird (relevance)
During a conversation with a Christian friend, during which my apostasy was challenged sincerely and politely but with the usual arguments and style...
Christian: And the Bible tells us that if we have Faith as small as a mustard seed...
Me: Yeah, we can move mountains. Matthew 17:20. So, tell me. Could God make an argument so circular that even He couldn’t believe it?
Christian: Of course! He’s God, God can do anything.
‘Made in His Image’ seems to apply all too well.
You’re quoting yourself!
Excuse the cameo. I hope the extra context doesn’t distract you too much from the SMBC quote or the reply.
Marquis de Condorcet, 1794
-- Karp
“In my experience, the most staunchly held views are based on ignorance or accepted dogma, not carefully considered accumulations of facts. The more you expose the intricacies and realities of the situation, the less clear-cut things become.”
Mary Roach - from her book Spook
John Maynard Keynes
--Emanuel Derman
(quoted in Beyond AI by JoSH Hall)
“Most people are more complicated than they seem, but less complicated than they think”
BS
“Love God?” you’re in an abusive relationship.
DLC, commenter at Pharyngula.
-- Douglas Adams, The Hitchhiker’s Guide to the Galaxy
I’ve always thought you can have more fun in New York than splashing around in the water. But I’m not a dolphin.
A bit of a meta-quote:
A Philosophy of Interior Design (1990) by Stanley Abercrombie, quoting “The Sitting Position—A Question of Method” (1958) by Joseph Rykwert.
---Star Trek: Voyager, “Good Shepherd”
I really dislike the nature versus nurture false dichotomy. It grates on me to see it still taken seriously, even after the premise that our actions are shaped entirely by one or the other has been as scientifically discredited as phlogiston.
Oh, I agree with you that nature vs. nurture is a false dichotomy, but I was actually cheered to see this exchange. As terrible as it is by our epistemic standards, it’s actually quite sophisticated by Star Trek standards. (So much of what gets called science fiction is actually technology fantasy.) I was similarly cheered to see the other exchange that I posted from that episode: he actually used the word hypothesis! Real philosophy of science! On Voyager! I love it! Best episode ever!
And you can see how this is still a rationality quote despite the conceptual confusion. Janeway is trying to break through Harren’s contempt, but Harren resists her cliches and insists on (what he erroneously thinks is) accuracy.
So which of the two characters exemplifies rationalist virtues? It seems to me we’ve got one who’s trying to use clichés to “break through” to the other, and one who’s just stubbornly wrong.
...this was fiction! Star Trek moreover. Do not expect realism!
“He was a dreamer, a thinker, a speculative philosopher...or, as his wife would have it, an idiot.”
“In madness all sounds become articulate.”—“Language of the Shadows”, Nile
For the record, I didn’t downvote this below zero, but I did at one point downvote it back to zero (and did the same for the Open Thread). Not because I disagree with the tradition in any way, but because I don’t think the first person to get around to posting the month’s thread should get tens of points of karma simply for being quick.
Rewarding people for prompt attention to housekeeping tasks seems more appropriate than punishing them.
I tend to vote such threads to around a couple of karma myself. 0 isn’t unreasonable; < 0 is quite peculiar. But my confusion was resolved in this instance. Someone messaged me and explained that he was trying to work out why the votes were fluctuating so much (4 → 0 in an hour), so he tested what would happen if he pushed it down one more, to −1.
As a side effect to these posts going up and down I’ve now started paying attention to only the last digit of the score. The 10s are mostly noise!
I certainly wouldn’t want to punish them, which is why I’d also upvote any such thread that ended up with a negative score.
--Dr. Samuel Johnson
-- Mark Chu-Carroll
There is more here: Brains as output/input devices
Although I should note that I believe there are phenomena that qualify to be defined as ‘free will’: specifically, endogenous processes that generate behavioral variability and thus non-linearity. Especially if you can show that the complexity of the transformation by which a system shapes the outside environment it is embedded in trumps the effectiveness of the environment’s influence on that system. In other words, mind over matter: you are able to shape reality more effectively, and in a more goal-directed way, than its crude influence shapes you.

For example, children and some mentally handicapped people are not responsible in the same way as healthy adults; they cannot give consent or enter into legally binding contracts. One of the reasons for this is that they lack control and are easily influenced by others. Healthy adults exert more control than children and handicapped people do. You experience, or possess, a greater degree of freedom in proportion to the influence and control you exert over the environment versus what the environment exerts over you.

This definition of free will only works once you arbitrarily define a system as an entity within an environment, as opposed to being the environment; whether the neural activity is consciously controlled by the system itself is therefore not a valid objection within this framework. Of course, in a strong philosophical sense this definition fails to address the nature of free will, since we can do what we want but cannot choose what we want. But I think it might still be a useful definition in science, psychology, and law, and it may well capture our everyday understanding of being free agents.
I should have checked the lesswrong wiki before posting this. And of course read the mentioned posts here on lesswrong.com.
Anyway, for those who care or are wondering what I have been talking about I thought I should provide some background information. My above drivel is loosely based on work by Björn Brembs et al.
PLoS ONE: Order in Spontaneous Behavior
Maybe a misinterpretation on my side. But now my above comments might make a bit more sense, or at least show where I’m coming from. I learnt about this via a chat about ‘free will’.
Hope you don’t mind I post this. Maybe somebody will find it useful or informative.
In my opinion you have made a rather egregious error in your evaluation of the issue of free will. You seem to have a dearly held pre-conceived notion that for anything to be established as true it must be proved in a laboratory. From which Mount Sinai did you receive this proclamation? In fact it is an article of faith.
The perception of every human being who has ever lived and is alive today tells us clearly that our actions are based on free will decisions. I can change the way I feel, I can change the way I behave by exercising my free will. I can decide what I want to think about and when I want to think about it. I can decide whether to shut off the alarm and go back to sleep or get out of bed early and do my daily exercise regimen.
If “Science” doubts the existence of free will, then there is something wrong with Science, not my clear perception of my free will (along with the clear perceptions of just about everyone alive and who has ever lived). It is your problem to “prove” that free will does not exist, not my problem to prove that it does exist.
One thing I do agree with: free will is something that is beyond the material world. But of course we are involved in non-material (supernatural, spiritual, etc.) activities from the moment we wake up until the moment we go to sleep. Communicating in written language, like everyone on this blog is doing, is one of them. We type absolutely meaningless symbols on a screen and somehow the ideas in my head get conveyed to whoever “reads” them. Take the most advanced laboratory in the world, and have them analyze the ink on a piece of paper and the paper itself. The laboratory can tell you everything about the chemical and molecular structure of both, but it cannot hope to ever figure out the message that is written there, AND YET IT IS THERE NONETHELESS. We attach non-material ideas to meaningless symbols on a piece of paper or on a computer screen.
Whatever the conclusion here, it will not be attained by ninja’ing “everybody’s perceptions” into natural law. Everybody perceives that heavy objects fall faster than light ones. Everybody perceives that squares A and B are different colours.
Do I need to recite the whole litany? We aren’t Aristotelians or apologists or something, we don’t get to do philosophy just by sitting back in our armchairs and imagining how the world “must” obviously be. You have to actually go and look at the world. Hence our “faith” in the lab.
Perhaps I should state it in a slightly different way. There is no reason for me or anyone else to doubt the clear perception we have of our own free will. Prove to me scientifically that it does not exist, that it is some kind of illusion that all human beings experience.
Fair enough. I’m glad you agree it’s an empirical question.
Having got that concession from you, I’ll tell you I actually don’t agree with Chu-Carroll’s analysis. Of course everything is definition-dependent here, but in essence, I don’t think free will is an illusion. Rather, I think the opposition of determinism to free will is just mistaken. Determinism does not imply no free will. This position is called compatibilism.
What Chu-Carroll is saying is that free will is not some weird force outside physics that thaumaturgically makes an electron zig instead of zag, causing the miracle of choice. I agree up to there. So if that’s what you call free will, then it is an illusion. He then implies “there is no free will.” Indeed, not under that definition.
But that’s not what I call free will.
See Daniel Dennett’s “Elbow Room;” also search this site for “Free Will.” Eliezer has done some excellent writing on the subject.
Something is either determined or it is undetermined, that is, to some degree random. We can make sense of no third option. The free will you want is apparently not compatible with determinism (too bad, mine is). But the free will you want is also not random: how could we be held responsible for a random event? Could a flip of a coin be what determines whether you act wrongly or rightly?
You ask for something that is neither determined nor undetermined and such a thing is impossible on pain of sacrificing the basic concepts we use to understand the world.
Of course we can. But that isn’t evidence of the sort of free will you’re talking about. I can do whatever I want. It just so happens that what I want is causally determined. That’s okay; almost everything is causally determined. The kind of free will you’re talking about isn’t even magic. At least everything Harry Potter, Santa Claus and Jesus do is conceptually coherent. Jesus didn’t turn water into square circles!
Can you say a bit more about that? I dislike disturbing others’ basic concepts of world-understanding without good reason, but my basic categories are simply threefold: it seems basic to me that events can be caused, chosen, or random.
To hear that it is impossible for something to be non-caused and non-random at the same time feels to me a bit like hearing that it’s impossible for a color with a pure hue to be non-red and non-green; the answer is simply that the color must be blue.
Can you give a specific, meaningful definition of “choice” that cannot be reduced to causality, randomness, or some mixture thereof? I’ve tried, and I really can’t think of any that hold up under scrutiny. Most of my attempts (from back when this seemed like a serious problem to me) turned out to be meaningless or circular definitions, or homunculi. Even if, for example, you posit an extraphysical soul that controls one’s body like a puppet, then the question of “How does the soul make its choices?” is still meaningful — a given choice can either be for a reason or for no reason — and there would in theory be a right answer, even if we somehow could never find out what it was (and we probably could anyway, by studying behavior and reverse-engineering general principles: the same thing science always does, even if it cannot directly study the underlying mechanism).
No, I can’t. I think that, in practice, the three categories are hopelessly interwoven. You should be able to “reduce” any event into any of the three categories.
Take the thing about the wallet. I can focus on the choice I made, and say it was my choice that mattered; I could have chosen differently, and but for my choice, the wallet would not have been returned. I can focus on the randomness, and say it was the randomness that mattered; I happened to win a small prize on a scratch-off lottery earlier that day, and but for that randomness, the wallet would not have been returned. I can focus on the causation, and say that it was the prior conditions that mattered; a perfect computer could have known that I had brain-state X immediately before returning the wallet, and brain-state X is sufficient to induce wallet returns.
Well, why did I have brain-state X? It could be because I had brain-state Y a moment ago, which is a sufficient cause of X, or it could be because the electrons in some of my neurons randomly happened to be in the right place at the right time, or it could be because I chose to look down at the ground and see what was there.
And so on—any given event can be explained in any of three different ways; when it comes to anything as complex as the human brain, the pure types exist only in our imaginations.
I assume that there is some line you would draw, be it at individual neurons, or cellular structures, or molecules, or individual atoms, beyond which you would say, “these types of things don’t make choices.” If so, how would you reply to the question: in what sense can a system built out of components that don’t make choices be said to make choices?
[grin] Cyan, I actually do believe that even subatomic particles, in some limited sense, can be said to make choices. I do agree with you that choice can’t arise out of choiceless components.
If you’re curious about what it might mean for an electron to make a choice, and you have a high tolerance for whimsy, I recommend the book Reenchantment without Supernaturalism. The philosophy isn’t very rigorous, and it’s rather out of date, but it’s the only one I know of that (a) takes physics seriously, (b) takes logic seriously, and (c) accounts for my intuition that I make choices without (d) dismissing that intuition as an illusion.
Quantum amplitudes evolve deterministically, and it’s generally held that quantum systems either decohere deterministically or collapse randomly. How does this permit subatomic particles to be said to make choices, even in a limited sense?
Is there a more easily accessible explanation of the argument in the book? (I’m not going to shell out for it.)
Not that I know of, although it might be in a good public library. Sorry about that; I know it’s unfair to ask you to look in an obscure book to get the basic drift of a contrarian argument; that sort of thing is rarely worth people’s time. If you happen to live near Boston, Miami, or San Francisco, which are places I’ll be over the next few months, then I’ll be happy to lend you my copy.
I don’t know nearly enough quantum physics to give you an intelligent answer to your question about amplitudes and systems in my own words. Again, sorry about that.
Nope. Oh well, at least the exchange served to clarify that you really do consider choice ontologically fundamental, and not just a useful category for practical purposes.
You might have to answer ata’s question before I can say more. But try this. Take a stock example of a free choice: Smith finds a wallet on the ground. He decides to return it. Why did he return it? Now maybe Smith didn’t choose to return it; maybe someone made him do it by hypnotizing him. This is obviously a causal event, and presumably we can agree that Smith isn’t using libertarian free will here. But maybe he did choose to return it. Even then, though, we don’t just stop wondering “why?”. When we ask for an explanation of behavior, “I chose to” isn’t enough; usually we ask for reasons. Reasons are significant explanations for actions because they appear to be morally significant. We don’t usually hold drugged or (were it to happen) mind-controlled people responsible for their actions to the same degree as people who choose to act based on reasons. But reasons are still causal explanations. Were returning the wallet not the moral thing to do (say the wallet held proof that the owner had committed a heinous crime), then Smith would not have returned it. Explanations based on reasons can be reduced to sentences about neurons and biochemistry the same way any causal explanation can. The only explanation I can think of that wouldn’t be causal is if the choice to return the wallet just happened. There was no reason, no cause. It just happened. It could have not happened, but it did. That’s what we mean by undetermined or random. (There is a middle ground; some things can be more likely than others.) Some philosophers have tried to argue that brains are sensitive enough to quantum-level indeterminacy that our actions cannot be perfectly predicted even in principle. But even if this is why Smith returned the wallet, it obviously doesn’t give Smith the kind of free will libertarians want.
I just don’t know what this other thing could be. Determined and indeterminate appear to exhaust the possibility space. Philosophers have tried for a very long time to offer something else up but they inevitably fail.
Just so. If I choose to return a wallet, that is just a brute fact about the universe. You should feel free to ask the question “Why?” about a pure choice, but I cannot think of any good answers to it. The only way you can get any analytical traction over choices is if you have more than one choice, or if the choices are mixed in with some randomness or some causality.
Your description is necessary for X to be random, but not sufficient for X to be random. Your description, to me, is just another way of saying that X is not caused. If you think that all things are either caused or random, then you will naturally conclude that a non-caused thing is random. If, however, you start off thinking that all things are caused, random, or chosen, then you will naturally conclude that a non-caused thing is either random or chosen—it will not occur to you to assume that all non-caused things must be random.
Conversely, what I mean by random is slightly more specific than what you mean by random. When you say “random,” you just mean that something is not caused. When I say “random,” I mean that something is not caused AND not chosen.
Randomness is a brute fact about the universe. We are always probing to find out what really underlies a probability distribution, but whenever we pause in our labors and treat a distribution as if it were actually driven by randomness, there is nothing interesting that we can say about that randomness. Why did the coin come up heads instead of tails? I have no idea. I can wave my hand in the direction of atmospheric physics and conservation of angular momentum, but we both know that I don’t have anywhere near enough information or computing power to actually calculate the path of the coin through the air.
Similarly, choice is a brute fact about the universe. When you ask me why I returned the wallet, I can wave my hand in the direction of ethics and psychology, but we both know that I don’t have anywhere near enough self-understanding or computing power to actually calculate the roots of my decision in my past.
The only difference between the coin and the wallet is that I controlled the path of the wallet in a way I did not control the path of the coin. They are both “random” in the sense of being un-caused, but one is “nonrandom” in the sense of being chosen.
So I made this particular argument partly because it’s one of my favorites (succinct and devastating), but also because going into what we know about brain science with the troll above would be useless. But let’s go into it for a minute. We’re far from understanding everything about the brain, but we know a lot and have good reason to think that everything we do is the product of neuron firings. How the brain works (making choices and otherwise) can be complicated, but it is almost certainly causal. As I said, some have tried to show that brains are random... but what is the third way brains could be? If choice is basic, you’re saying it can’t be reduced to neuron firings, but everything we know about the brain suggests it can. Where is choice in the objective realm, the world of particles and forces?
Well, in quantum mechanics it might be, but
The fact that you or I are not capable of calculating the path of a flipping coin doesn’t make randomness basic. The fact that in principle we could calculate it, given enough information and brainpower, proves that the path of a coin is actually causally determined. My mention of a coin flip in my original comment was metaphorical.
Your ignorance is not cause for postulating fundamental ontologies. It is a mysterious answer. If we knew more about ethics and psychology you absolutely could do this calculation. Besides which, in many circumstances you know exactly why someone made a choice. I don’t need to know anything about psychology or brains to know that someone who returns a wallet is probably doing so because they are being ethical or hoping for a reward. These are obvious reasons for choosing to return a wallet.
Also please distinguish choice from randomness. I’m still not quite sure what this concept consists of. How can something be uncaused but not random? If there is no cause what constrains the outcome? If there is no constraint on the outcome how is it not random?
Neurologically speaking, I’ll admit that you could be right. It could turn out to be the case that humans happen to be a sort of creature that makes decisions based on neuron firings, and that neuron firings are in turn based entirely on deterministic particle collisions plus a bit of quantum randomness on the side. I keep half an eye on the neuroscience articles in the popular press, and if and when they report that conclusion, I’ll take it seriously. It would force me to revise several of my core beliefs. One belief that wouldn’t change, though, is the belief that, as a matter of philosophy, there’s nothing wrong with the idea of a choosing being, even if we have no real life examples of choosing beings on Earth.
I’ve read and re-read the page on mysterious answers, and I think it’s a great article. I don’t mean to shut off inquiry at all by saying that I choose things; by all means, scan my brain and tell me what you see! I’ll be mildly curious. I’m not curious enough to do the scans myself, because I’m too busy trying to investigate my mind at the level of macrophenomena like “habits,” “willpower,” and “awareness” to bother much with the microfoundations of those phenomena in individual neurons or clusters. I expect that within my lifetime, brain scans will be cheap, safe, and precise enough that I’ll be able to get better information from physical science than from introspection, and then I’ll switch the bulk of my investigative activity over to physical science. In the meantime, I find that “choice” works just fine as a placeholder in the heuristic equations I use to model my mental macrophenomena. It may or may not correspond to anything real at the quantum level, but it helps me understand myself, so I’m using it.
I’m sorry; I’m not sure what else to say. If my analogy about red/blue/green and my necessary-vs.-sufficient paragraph didn’t get the point across, then I don’t know how else to explain it. If you insist on using a mental model that has only two possible values for a variable, then talk about a third value will not make any sense to you; I cannot stop you from trying to explain the third value in terms of your existing mental model and then getting confused or annoyed when it doesn’t work.
But this is what is so great about Bayesian epistemology. You don’t have to wait for some neuroscientists to announce this finding. If you know a decent amount of neuroscience now, you can be fairly confident in predicting that they one day will be able to explain choice in terms of neuron firings. All the people here who believe this aren’t just making it up. We’re extrapolating from what is known and making reasonable inferences. If you wait for someone to figure out exactly how it is done you’re going to spend a lot more time being wrong than those who infer in advance. Again though, I can already make accurate predictions about people’s choices based on macro-phenomena.
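To make that kind of extrapolation concrete in code (a toy illustration; the prior and likelihood numbers here are invented for the example, not real neuroscience data):

```python
# Toy Bayesian update: confidence that choices reduce to neuron firings,
# updated after each study that successfully predicts a choice from
# neural data. All numbers are illustrative assumptions.
prior = 0.5
likelihood_if_true = 0.9   # chance a study succeeds if reductionism holds
likelihood_if_false = 0.3  # chance it succeeds even if it doesn't

posterior = prior
for _ in range(3):  # three successful prediction studies
    numerator = posterior * likelihood_if_true
    denominator = numerator + (1 - posterior) * likelihood_if_false
    posterior = numerator / denominator

print(round(posterior, 3))  # 0.964
```

The point is that you can be quite confident of the conclusion well before anyone announces a complete mechanistic account, provided the evidence keeps pointing the same way.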
But don’t confuse placeholders with fundamental properties. I have no problem with “choice”. I use it all the time. I think I make choices constantly. If it is helpful for your models by all means use it. But that doesn’t require you to assert that choice is some incredible new kind of event which is neither causal nor random. I have lots of things in my ontology that are not in my basic ontology: morality, love, basketball etc. Maybe in modeling subjective experience you even want to distinguish things you do from other caused or random events and so use this word “choice” in a special way. But surely you can recognize that you aren’t actually that different from all the other objects you discover in the world and likely work the same way they do. And when you take this objective, view from nowhere, scientific perspective I don’t see how you can have an event that is neither caused nor random.
I quite understand that your “choice” is neither caused nor random but a third value that is neither. What I don’t understand is what positive qualities this third value possesses. I promise you I can make sense of there being a third variable in the abstract. What I don’t understand is what your third variable is. I can say lots of things about “random” and “causally determined” that distinguish these properties. But I haven’t heard you do anything in the way of describing this third property.
You should know that you’re hardly the first person who has wanted this kind of free will and gone about inventing a third kind of thing to prove that it exists. One reason I’m so skeptical is that every single one of these attempts that I know of has failed miserably. Libertarians are a very small minority among contemporary analytic philosophers for a reason.
OK, good, I thought so. You seemed pretty smart.
Why don’t you go ahead and do that, for a paragraph or so, and I’ll see if I can complete the pattern for you and give you the kind of description you’re looking for. To me it just seems obvious what a choice is, in the same way that I know what “truth” is and what “good” is, but if you can manage to describe the meaning of “random” analytically then I can probably copy it for the word “chosen.” If I can’t, that will surprise me.
Have I waxed poetic about souls and destiny and homunculi? I don’t remember “inventing” a third kind of thing. I’m just sort of pointing at my experience of choice and labeling it “choice.” If you insist that what I think is choice is really something else, you’re welcome to prove it to me with direct evidence, but I’m not really interested in Bayesian inferences here. I am unconvinced that brains and rocks are in the same reference class. I do not accept the physicalist-reductionist hypothesis as literally true, despite its excellent track record at producing useful models for predicting the future. I understand that the vast majority of people on this site -do- accept that hypothesis. I do not have the stamina or inclination to hold the field on that issue against an entire community of intelligent debaters.
It’s obvious to you what “truth” is and what “goodness” is? Really? I think I can say clever and right things about these concepts because I’ve done a lot of studying and thinking. But the answers don’t seem obvious at all to me. Anyway, causality and randomness. Clearly huge topics about which lots have been said.
I believe a causal event is a kind of regularity, extended in spacetime, with a variable at one end that can be manipulated by a hypothetical agent to control a variable at the other end (usually the effect part is later in time). So by altering the velocity of an asteroid, the mean temperature of the planet Earth can be dramatically altered, for example. On a micro level, intervening on a neuron and causing it to fire at a certain rate will lead to adjacent neurons firing. Altering the social mores of a society can cause a man not to return a wallet. For any one event to occur, a large number of variables have to be right, and any one of those variables can be altered so as to alter the event, so these simple examples are overly simple. Lots more has been said if you’re interested; Pearl and Woodward are good authors.
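That interventionist picture can be sketched in a few lines of Python. This is just a toy structural model of the wallet example with invented variable names, not anything taken directly from Pearl or Woodward:

```python
def returns_wallet(social_mores: str, found_wallet: bool) -> bool:
    """Toy structural equation: the downstream 'effect' variable as a
    function of upstream variables an agent could intervene on."""
    return found_wallet and social_mores == "honesty-valued"

# The naturally occurring case: honesty-valuing mores, wallet found.
observed = returns_wallet("honesty-valued", True)   # True

# An intervention in Pearl's sense: force one upstream variable to a
# different value and check whether the downstream variable changes.
intervened = returns_wallet("indifferent", True)    # False
```

Calling the event causal, on this account, amounts to the claim that some such upstream variable exists whose manipulation reliably changes the outcome.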
Randomness might be more difficult, since it isn’t obvious that ontological randomness even exists. Epistemological randomness does: rolling a die is a good example; we have no way to predict the outcome, but in principle we could predict it. Some interpretations of quantum mechanics do involve ontological randomness. Such events can be distinguished from causal events in that the value of the resulting variable cannot be controlled by any agent: not because no agent is powerful enough, but because there are no variables which can be intervened on to alter the outcome in the way desired. There is no possibility of controlling such events. It is possible that quantum indeterminacy is just the product of a hidden variable we don’t know about, or that the apparent randomness is actually just a product of anthropics: every possible state gets observed, and every outcome seems random because “you” only get to observe one and can’t communicate with the other “you”s.
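The epistemological/ontological distinction can be illustrated with a seeded pseudorandom generator: the rolls look random to anyone who lacks the hidden state, yet are fully determined given it. (A sketch of the distinction, not a claim about physical dice.)

```python
import random

# Two generators sharing the same hidden state (the seed) produce
# identical "random" die rolls: unpredictable without the seed,
# completely determined with it -- epistemic, not ontological, randomness.
rng_a = random.Random(42)
rng_b = random.Random(42)

rolls_a = [rng_a.randint(1, 6) for _ in range(5)]
rolls_b = [rng_b.randint(1, 6) for _ in range(5)]

print(rolls_a == rolls_b)  # True
```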
I don’t have a problem with you pointing at an experience and labeling it “choice”. I do that too. You make choices. It’s just that what it is to make a choice is one of these two things: a caused event or an uncaused event. You invent a third kind of thing when you come up with a new kind of event which isn’t seen anywhere else and declare it to be fundamental. And the way many philosophers have historically dealt with this exact problem is by positing souls and homunculi, “agent causation” and whatnot. When you decide that your experience of choice is a fundamental feature of the world, you’re doing the exact same thing: any claim that something is irreducible is the same as a claim that it belongs in our basic ontology. The fact that you didn’t do this in verse just means I’m not annoyed; it’s still the same mistake.
I’ve been known to be more tolerant than others of unorthodoxy on this matter, and I doubt many more would join in. Most people probably have the same arguments anyway. You’re not obligated to, but I’d be interested in hearing your reasons for not accepting the hypothesis. However, my definition of truth is something like “the limit of useful modeling”, so we might have to sort truth out a bit too. If you preface the discussion to demonstrate that you’re aware the position is unpopular and that you’re just trying to work this out, you can probably avoid a karma hit. I’ll vote you up if it happens.
Sure, consider it prefaced. I’m not trying to convince anybody; I’m just sharing my views because one or two users seem curious about them, and because I might learn something this way. It’s not very important to me. If anyone would like me to stop talking about this topic on Less Wrong, feel free to say so explicitly, and I will be glad to oblige you.
I don’t mean that the entire contents, in detail, of what is and is not inside the box marked “true” is known to me. That would be ridiculous. I just mean that I know which box I’m talking about, and so do you. Sophisticated discussions about what “true” means (as opposed to discussion about whether some specific claim X is true) generally do more harm than good. You can tell cute stories about The Simple Truth, and that may help startle some philosophers into realizing where they’ve gone off-course, but mostly you’re just lending a little color to the Reflexive Property or the Identity Property: a = a.
I can probably work with this. I expect you will still think I’m postulating unnecessary ontological entities, and, given your epistemological value system, you’ll be right. Still, maybe the details will interest you.
Some interpretations of conscious awareness do involve ontological choice. Such events can be distinguished from random events in that the value of the resulting variable can be controlled by exactly one agent, as opposed to zero agents, as in the case of a truly random variable. The agent in question could be taken to be some subset of the neurons in the brain, or some subset of a person’s conscious awareness, or some kind of minimally intervening deity. It is not clear exactly who or what the agent is.
Conscious events can be distinguished from caused events in that conventional measures of kinetic power and information-theoretic power are bad predictors of a hypothetical agent’s ability to manipulate the outcome of a conscious event. Whether because the relevant interactions among neurons, given their level of chaotic complexity, occur in a slice of spacetime that is small enough to be resistant to external computation, or because the event is driven by some process outside the well-understood laws of physics, a conscious event is difficult or impossible to control from outside the relevant consciousness. Thus, instead of a single output depending subtly on many other variables, the output depends almost exclusively on a single input or small set of inputs.
I’d be happy to explain it in August, when I’ll be bored silly. At the moment, I’m pretty busy with my law school thesis, which is on antitrust law and has little to do with either free will or reductionism. Feel free to comment on any of my posts around that time, or to send your contact info to zelinsky a t gm ail dot com. Zelinsky is a rationalist friend of mine who agrees with you and only knows one person who thinks like me, so he’ll know who it’s for.
Thanks for bearing with me so far and for responding to arguments that must no doubt strike you as woefully unenlightened with a healthy measure of respect and patience. I really am done with both the free will discussion and the reductionist discussion for now, but I enjoyed discussing them with you, and consider it well worth the karma I ‘spent’. If you can think of any ways that what you see as my misunderstanding of free will or reductionism is likely to interfere with my attempts to help refine LW’s understanding of Goodhart’s Law, please let me know, and I’ll vote them up.
Why? How an algorithm feels is not a reliable indicator of its internal structure.
For convenience. If you show me a few examples where believing that I don’t have free will helps me get what I want, I might start caring about the actual structure of my mental algorithms as seen from the outside.
It is beneficial to believe you don’t have free will if you don’t have free will. From Surely You’re Joking, Mr. Feynman!:
All right, suppose all that is true, and that people can be hypnotized so that they literally can’t break away from the hypnotizing effect until released by the hypnotist.
That suggests that I should believe that hypnotism is dangerous. It would be useful to be aware of this danger so that I can avoid being manipulated by a malicious hypnotist, since it turns out that what appears to be parlor tricks are actually mind control. Great.
But, if I understand it correctly, which I’m not sure that I do, a world without free will is like a world where we are always hypnotized.
Once you’re under the hypnotist’s spell, it doesn’t do any good to realize that you have no free will. You’re still stuck. You will still get burned or embarrassed if the hypnotist wants to burn you.
So if I’m already under the “hypnotist’s” spell, in a Universe where the hypnotist is just an impersonal combination of an alien evolution process and preset physical constants, why would I want to know that? What good would the information do me?
I’m sorry, I’m not maintaining that free will is incompatible with determinism, only that sometimes free will is not present, even though it appears to be. When hypnotized, Richard Feynman did not have (or, possibly, had to a greatly reduced extent) free will in the sense that he had free will under normal circumstances—and yet subjectively he noticed no difference.
It appears to me that you created your bottom line from observing your subjective impression of free will. I suggest that you strike out the entire edifice you built on these data—it is built on sand, not stone.
I see; I did misunderstand, but I think I get your point now. You’re not claiming that if only Mr. Feynman had known about the limits of free will he could have avoided a burn; you’re saying that, like all good rationalists everywhere, I should only want to believe true things, and it is unlikely that “I have free will” is a true thing, because sometimes smart people think that and turn out to be wrong.
Well, OK, fair enough, but it turns out that I get a lot of utility out of believing that I have free will. I’m happy to set aside that belief if there’s some specific reason why the belief is likely to harm me or stop me from getting what I want. One of the things I want is to never believe a logically inconsistent set of facts, and one of the things I want is to never ignore the appropriately validated direct evidence of my senses. That’s still not enough, though, to get me to “don’t believe things that have a low Bayesian prior and little or no supporting evidence.” I don’t get any utility out of being a Bayesianist per se; worshipping Bayes is just a means to an end for me, and I can’t find the end when it comes to rejecting the hypothesis of free will.
Robin, I’ve liked your comments both on this thread and others that we’ve had, but I can’t afford to continue the discussion any time soon—I need to get back to my thesis, which is due in a couple of weeks. Feel free to get in the last word; I’ll read it and think about it, but I won’t respond.
Understood.
My last word, as you have been so generous as to give it to me, is that I actually do think you have free will. I believe you are wrong about what it is made of, just as the pre-classical Greeks were wrong about the shape of the Earth, but I don’t disagree that you have it.
Good luck on your thesis—I won’t distract you any more.
I place a very low probability on my having genuine ‘free will’, but I act as if I do, because if I don’t, it doesn’t matter what I do. It also seems to me that people who accept nihilism have life outcomes that I do not desire to share, so the expected utility of acting as if I have free will is high even absent my previous argument. It’s a bit of a Pascal’s Wager.
Why do you define “free will” to refer to something that does not exist, when the thing which does exist—will unconstrained by circumstance or compulsion—is useful to refer to? For one, its absence is one indicator of an invalid contract.
I’m not exactly sure what you’re accusing me of. I think Freedom Evolves is about the best exposition of how I conceive of free will. I am also a libertarian. I find it personally useful to believe in free will irrespective of arguments about determinism and I think we should have political systems that assume free will. I still have some mental gymnastics to perform to reconcile a deterministic material universe with my own personal intuitive conception of free will but I don’t think that really matters.
I’m confused. I haven’t read Freedom Evolves, but Dennett is a compatibilist, afaik.
I think you’re saying you’re a compatibilist but act as if libertarianism were true, but I’m not sure.
I don’t really understand what you mean when you use the word ‘libertarian’ - it doesn’t seem particularly related to my understanding. I mean it in the political sense. Perhaps there is a philosophical sense that you are using?
Libertarian is the name for someone who believes free will exists and that free will is incompatible with determinism. Lol, it didn’t even occur to me you could be talking about politics.
I swear, if there ever exists a Less Wrong drinking game, “naming collision” would be at least “finish the glass”.
Ok, I’ve done some googling and think I understand what you meant when you used the word. I’d never heard it in that context before. I guess philosophically I’m something like a compatibilist then, but I’m more of an ‘it’s largely irrelevant’-ist.
I see. The word “genuine” is important, then—a nod to the “wretched subterfuge” attitude toward compatibilist free will. I withdraw my implications.
(I read Elbow Room, myself.)
No! A world without libertarian free will is a world exactly like this one.
ETA: Robin’s point, I gather, is that a world without libertarian free will is a world where hypnotism is possible. Which, as it turns out, is this world.
I was actually making a lesser point: that the introspective appearance of free will is not even a reliable indicator of the presence of free will, much less a reliable guide to the nature of free will.
Edit: From which your interpretation follows, I suppose.
(An aside: this sort of view is common in the free will dialectic—it is an incompatibilist theory, probably of the non-causal type. Jack’s objection is a standard one, and for good reason, I believe.)
Supernatural free will doesn’t exist. If your concept of free will requires it to be supernatural (mine doesn’t) and you are sure you experience free will, then clearly there is something wrong with either your concept or your perception.
Watch this: The Relativity of Wrong—http://www.youtube.com/watch?v=2tcOi9a3-B0
First of all, you cannot think when and about what you want, just as you cannot want what you want. This leads to an infinite recursion. No system can fully understand itself, for the very understanding would forever evade itself. A bin trying to contain itself.
Anyway, there is no reason to go beyond physical, factual inquiry right now. You can assess your data with practicability. If a drug makes you think that you can fly you can jump from the next bridge and be brought back down to earth by reality.
Reality is not subject to interpretation, only description and abstraction. The fundamental nature of reality, its characteristics and qualities are absolute. Red is always red even when you call it green.
You don’t have to have a comprehensive grasp to determine the simplest of all conclusions: Something exists or it doesn’t. Either something is tangible or it doesn’t exist. My definition of subsistence is for something to have an influence on me. This also implies that something that exists can be subject to scientific inquiry. Something that exists can be assessed with practicability. It makes a difference.
If I cut my throat I may discover that I was dreaming or that I have been playing some advanced virtual reality game all along. Everything is possible. But right now there are safer and more promising options of gaining knowledge. How can I be sure? I can’t, but there is evidence which proved to be reliable so far. I have to suspect that it will continue to be reliable based on experiment and observation. That doesn’t make it the ultimate way of knowing or even a superior way but the best I know of at this time. And until I hit some hard barrier I do not have any good reason to try something else.
[Connections to rationality: Focus, taking action, and conversation style.]
---Star Trek: Voyager, “Good Shepherd”
“To know something is to make this something that I know myself; but to avail myself of it, to dominate it, it has to remain distinct from myself.”—Miguel de Unamuno
--Jean-Luc Picard
-Michael Anissimov
Don’t tolerate intolerance. -bumper sticker
— Bob Dylan, “Desolation Row”
Believe those who are seeking the truth. Doubt those who find it.
No, curiosity seeks to annihilate itself.
A man with one watch knows what time it is; a man with two watches is never quite sure.
A man with one watch might have the wrong time; a man with two watches is more aware of his own ignorance.
The only problem with this quote is that if I have two watches and they show the same time (maybe I synchronized them at some point), that would seem to make me more confident about what time it actually was, given that with a single watch the battery could be dying and the watch could tick a little slower. Or maybe I’m thinking too much.
I always liked this quote. I think I originally saw it in a Robert Anton Wilson book. I usually use it in the context of terminal values.
-- Steven E. Landsburg
I see this is being downvoted badly. I got it. Anyway, for those interested in the nature of reality, check out the discussion about the above quote here: http://www.thebigquestions.com/2009/12/17/non-simple-arithmetic/
I’d delete it here, but since there are comments referring to it, I won’t.
Upvoted for getting it.
Good on you for not deleting it. I have it downvoted, but I just upvoted this comment to compensate.
The derivation of (yes, incomplete, but useful) arithmetic from basic axioms, or the derivation (in another sense) of reasonably reliable arithmetic from our evolved intuition, is a perfect example of complexity arising from simplicity. There’s no comparison.
And in a more abstract sense — the transuniversal truth of arithmetic, not the practical discovery or application of it, nor any attempts to formalize it — there’s nothing to “arise” at all.
In any case, the “Therefore Dawkins and his opponents are equally wrong” sounds like a non-sequitur. A more understandable conclusion would be “I am wrong about the implications of the beliefs of Dawkins and his opponents.” He basically says “If you believe in Intelligent Design, you must believe that God decided that 2+2 would equal 4. You don’t believe that, therefore your belief system is inconsistent or you are a hypocrite. If you believe in evolution by natural selection, you must believe that 2+2 evolved to equal 4. You don’t believe that, therefore your belief system is inconsistent or you are a hypocrite.” He’s just making up new beliefs, ascribing them to his opponents, and pointing out their ridiculousness.
Ah ok, I guess I shouldn’t have posted this on lesswrong.com—Really, it’s a book that muses about the possibility of a mathematical universe. Mind is biology. Biology is chemistry. Chemistry is physics. Physics being math. Mind perceives math, thus the universe exists physically. Erase the “baggage” and all that’s left is math. It explicitly states that all assertions are pure speculation, philosophical thought, not science. I think it’s a very beautiful idea. This quote thus might be a bit out of context.
More info here: http://www.marginalrevolution.com/marginalrevolution/2009/11/the-big-questions.html
So please don’t judge the book by my quote here. Wasn’t my intention.
If you’ve read Permutation City by Greg Egan, this is musing about it being real.
Gary Drescher’s “Good and Real” is an example of this sort of Deep Book done right. Landsburg seems to make a lot more errors—like he tried to write Good and Real but failed.
I’ve ordered Good & Real when I heard you mentioning it during the video Q&A. Hasn’t arrived yet though, few weeks delivery time...thanks though.
This is horribly, horribly wrong, and I talked about it on an Open Thread here.
I continued my critique on my blog, which drew Landsburg out of the woodwork; we had a back-and-forth that continued onto his blog. He did follow-up posts here and here, but I haven’t replied much further on those, because I was really starting to get caught up in “someone is wrong on the internet” syndrome.
Anyway, here’s what’s wrong (if you don’t want to read the links): there is no consistent definition of terms that makes Landsburg right. After a lot of critique, the error turns out to hinge on the meaning of “exist”. Put simply, math doesn’t exist—not in the same sense that e.g. biological organisms exist, which meaning Dawkins is using there.
Basically, Landsburg is positing the existence of a Platonic realm of math that is always “out-there”, existing. This is a major map-territory confusion, and should be a warning to rationalists. It’s a confusion of human use of math, with the things that can be described in the language of math. The only way he supports this position is by rhetorical bullying: “come on, you don’t really think the numbers didn’t exist before humans, do you?” And leads him into deeper confusions, like believing that we “directly perceive” mathematical truths and that they can tell us—by themselves—useful things about the world. (The latter is false because you always require the additional knowledge “and this phenomenon behaves in way isomorphic to these mathematical expressions”, which requires interacting with the phenomenon, not just Platonic symbol manipulation.)
(Note that everything he claims is true and special about math, theists claim about God, but this post is already too long to elaborate.)
The only sense in which Landsburg is right is this: it has always been the case that if, counterfactually, someone set up a physical system with an isomorphism to the laws of math, performed operations, and then re-interpreted the result according to that same isomorphism, it would match up with what follows from the rules and axioms of math.
But Dawkins’s claim doesn’t deny that at all; he’s claiming that populations of organisms evolved, not that “the counterfactual mathematical expression of evolution’s working” evolved, the latter of which would indeed be in contradiction of the previous paragraph.
I agree; but interestingly, that doesn’t imply that mathematical Platonism is false. I’m becoming more and more convinced that the universe is a relatively simple mathematical object, and that this universe existing is a special case of all mathematical objects existing.
My mistake was to post this here. I hadn’t thought it through; I just shared what I found interesting and thought-provoking. I was never going to take his musings as factual or as truth-claims. As far as I can tell, he doesn’t either. It’s metaphysics. It is appealing on a poetic level. This quote thus might be a bit out of context.
On a side note, I’m not sure what Dawkins believes, but I know that he does indeed extend the concept of biological evolution to encompass other systems like culture and thought. If I remember right, he actually thinks it is a universal phenomenon that even applies to physics.
http://www.universaldarwinism.com/Dawkins%20Richard.htm
Richard Dawkins—Applying Darwinian Evolution to Physics: http://www.youtube.com/watch?v=lRTfgQG5Yn8
I don’t think it was so bad to post something that spurred such a discussion. It wasn’t off-topic, just wrong (or Not Even Wrong) in an argument-worthy way.
Normally, I think karma is a reasonably good guide to writing here, but in this case I find myself both approving of the fact that you posted this, and approving of the massive downvoting (which I added to). Odd.
Thanks, as I wrote here, I’ll just listen and learn now. Next time I’ll hopefully think about it, if I might actually be able to add something valuable. This is certainly not the usual forum where some uneducated guy like me can just chat about anything he might not even understand properly. Which is very good, less noise.
I’m happy the original quote at least caused some debate.
Okay, but just to be clear, those are still different from believing in “evolution” of the truth of the counterfactuals I described. Yes, the thoughts held by people evolve, but that’s not the same thing as believing that “the possibility of biological evolution in a counterfactual sense” evolved.
I’m not sure what you are on about. I’ll read up on the links you provided tomorrow, so bear with me. As I understand it, the whole point is that the laws of physics, or rather what gives rise to them, are equal to the structure we describe as, and by the use of, mathematics. Our interpretations of these patterns as physics, or as living things, are just necessary abstractions drawn by our minds. The territory really is math; our map is the things.
Um, any thoughts on reading the past exchanges?
I’m serious on this. I apologize for my naivety in thinking I could participate in such a discussion and for posting this quote in the first place. Reading some of your exchanges, and especially the one between Splat and Steven Landsburg (03 February 2010), opened my eyes about how little I really know and that I’m completely unable to judge any claims being made regarding this topic. Most of it is indeed far over my head. I’ll retreat to further studying, educating myself, and listen and learn what you people have to say. So please ignore my previous comments.
I’m happy that the original quote at least caused some, hopefully enlightening, debate.
The more I read that, the less sense it makes. Are we to conclude that Dawkins is as wrong about evolution as the Intelligent Design proponents? Is there the slightest reason to think that whatever source Landsburg is paraphrasing as “for Dawkins, complexity can arise only from simplicity”, Dawkins had anything but evolution in mind? What has the ontological status of arithmetic to do with how present-day lifeforms came to be? What does any of this have to do with rationality?
Landsburg does not doubt biological evolution. It’s just an argument about complexity being inherent in the laws of nature, in reality. As for what it has to do with rationality: it’s thought-provoking, and rationality is a means to an end in reaching your goals. If your goal is to fathom the nature of reality, these thoughts are valid in that they add to the pile of possibilities worth considering in this regard.
His thoughts on that are confused too. He claims that math is fundamental to physics, but also that it’s infinitely complex. That doesn’t work:
1) Math is simple in the sense that you need very little space to specify the entities needed to use it.
2) But Landsburg says it’s complex because you haven’t really specified it until you know every mathematical truth.
3) But then physics isn’t using math by that definition! It’s using a tiny, computable, non-complex subset of that.
(This is discussed at length in the links I gave.)
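(To make point 1 concrete, here is a minimal sketch of how little space the entities needed to *use* arithmetic take up. This is my own illustrative Python, not anything from the links; the nested-tuple encoding of numbers is just one arbitrary choice.)

```python
# Peano-style arithmetic in a few lines: zero, successor, addition,
# multiplication. A number n is represented as n nested applications
# of succ to zero.

zero = ()

def succ(n):
    """Successor: wrap n in one more tuple layer."""
    return (n,)

def add(m, n):
    """m + 0 = m;  m + succ(k) = succ(m + k)."""
    return m if n == zero else succ(add(m, n[0]))

def mul(m, n):
    """m * 0 = 0;  m * succ(k) = (m * k) + m."""
    return zero if n == zero else add(mul(m, n[0]), m)

def to_int(n):
    """Decode back to a Python int, for inspection only."""
    count = 0
    while n != zero:
        n = n[0]
        count += 1
    return count

two = succ(succ(zero))
three = succ(two)
```

That whole specification is a few hundred characters, which is the sense in which the machinery you need to *use* arithmetic is simple; nothing here requires knowing every truth *about* the naturals.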
Thought-provoking is good, but don’t fall for the trap of worshipping someone for saying stuff that doesn’t make sense.
I’m not sure “thought-provoking” is actually a good thing any more than “reflectively coherent” is a good thing. “Thought-provoking” is just a promise of future benefit from understanding; that promise is often broken.
Then what are Dawkins and his opponents “equally wrong” about? What does it mean to say that complexity is “inherent in the laws of nature”? Or that it isn’t? What does Landsburg mean by “complexity”? Is arithmetic “complex” because it contains deep truths, or is it “simple” because it can be captured in a small set of axioms?
I have yet to understand what is being claimed here.
RichardKennaway:
Arithmetic is complex because it cannot be captured in a small set of axioms. More precisely, it cannot be specified by any (small or large) set of axioms, because any set of (true) axioms about arithmetic applies equally well to other structures that are not arithmetic. Your favorite set of axioms fails to specify arithmetic in the same way that the statement “bricks are rectangular” fails to specify bricks; there are lots of other things that are also rectangular.
This is not true, for example, of Euclidean geometry, which can be specified by a set of axioms.
Silas Barta’s remarks notwithstanding, the question of which truths we can know has nothing to do with this; we can never know all the truths of Euclidean geometry, but we can still specify Euclidean geometry via a set of axioms. Not so for arithmetic.
Here we go again.
Then the universe doesn’t use that arithmetic in implementing physics, and it doesn’t have the significance you claim it does. Like I said just above, it uses the kind of arithmetic that can be captured in a small set of axioms. And like I said in our many exchanges, it’s true that modern computers can’t answer every question about the natural numbers, but they don’t need to. Neither does the universe.
Yes, but you only need finite space to specify bricks well enough to get the desired functionality of bricks. Your argument would imply that bricks are infinitely complex because we don’t have a finite procedure for determining where an arbitrary object “really” is a brick, because of e.g. all the borderline cases. (“Do the stones in a stone wall count as bricks?”)
Then the universe doesn’t use that arithmetic in implementing physics,
How do you know?
Like I said just above, it uses the kind of arithmetic that can be captured in a small set of axioms.
What kind of arithmetic is that? It would have to be a kind of arithmetic to which Godel’s and Tarski’s theorems don’t apply, so it must be very different indeed from any arithmetic I’ve ever heard of.
Mainly from the computability of the laws of physics.
Right—meaning the universe doesn’t use arithmetic (as you’ve defined it). You’re getting tripped up on the symbol “arithmetic”, for which you keep shifting meanings. Just focus on the substance of what you mean by arithmetic: Does the universe need that to work? No, it does not. Do computers need to completely specify that arithmetic to work? No, they do not.
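(A small illustration of that last point, with details of my own choosing: the arithmetic a computer actually implements is a finite, completely specified fragment, e.g. fixed-width integers with wraparound. Nothing about it is left undetermined by its specification.)

```python
# 8-bit machine arithmetic, simulated in pure Python. The entire
# behavior is fixed by a finite table (256 x 256 entries), so there
# is no appeal to the full, non-axiomatizable theory of N.

WIDTH = 8
MOD = 1 << WIDTH  # 256

def add8(a, b):
    """Machine addition: ordinary addition, then wraparound."""
    return (a + b) % MOD

def mul8(a, b):
    """Machine multiplication, same wraparound convention."""
    return (a * b) % MOD
```

Real hardware uses 32 or 64 bits rather than 8, but the point is the same: the fragment in use is finite and fully specified.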
By the way:
1) To quote someone here, use the greater-than symbol before the quoted paragraph, as described in the help link below the entry field for a comment.
2) One should be cautious about modding down someone one is in a direct argument with, as that tends to compromise one’s judgment. I have not voted you down, though if I were a bystander to this, I would.
Silas:
First—I have never shifted meanings on the definition of arithmetic. Arithmetic means the standard model of the natural numbers. I believe I’ve been quite consistent about this.
Second—as I’ve said many times, I believe that the most plausible candidates for the “fabric of the Universe” are mathematical structures like arithmetic. And as I’ve said many times, obviously I can’t prove this. The best I can do is explain why I find it so plausible, which I’ve tried to do in my book. If those arguments don’t move you, well, so be it. I’ve never claimed they were definitive.
Third—you seem to think (unless I’ve misread you) that this vision of the Universe is crucial to my point about Dawkins. It’s not.
Fourth—Here is my point about Dawkins; it would be helpful to know which part(s) you consider the locus of our disagreement:
a) the natural numbers—whether or not you buy my vision of them as the basis of reality—are highly complex by any reasonable definition (I am talking here about the actual standard model of the natural numbers, not some axiomatic system that partly describes them);
b) Dawkins has said, repeatedly, that all complexity—not just physical complexity, not just biological complexity, but all complexity—must evolve from something simpler. And indeed, his argument needs this statement in all its generality, because his argument makes no special assumption that would restrict us to physics or biology. It’s an argument about the nature of complexity itself.
c) Therefore, if we buy Dawkins’s argument, we must conclude that the natural numbers evolved from something simpler.
d) The natural numbers did not evolve from something simpler. Therefore Dawkins’s argument can’t be right.
It seems to me that the definition of complexity is the root of any disagreement here. It seems obvious to me that the natural numbers are not complex in the sense that a human being is complex. I don’t understand what kind of complexity you could be talking about that places natural numbers on an equivalent footing with, say, the entire ecosystem of the planet Earth.
Contrary to what SteveLandsburg says in his reply, I think you are exactly right. And this is how our disagreement originally started, by me explaining why he’s wrong about complexity.
Scientists use math to compress our description of the universe. It wouldn’t make much sense to use something infinitely complex for data compression!
So, to the extent he’s talking about math or arithmetic in a way that does have such complexity, he’s talking about something that isn’t particularly relevant to our universe.
I think the system of natural numbers is pretty damn complex. But the system of natural numbers is an abstract object and Dawkins likely never meant for his argument to apply to abstract objects, thinks all abstract objects are constructed by intelligences or denies the existence of abstract objects.
I think there is a good chance that all abstract objects are constructed, and a better chance that the system of natural numbers was constructed (or at least that the system, when construed as an object and not a structural analog, is constructed and not discovered. That is, numbers are more like adjectives than nouns, and adjectives aren’t objects.)
mattnewport: This would seem to put you in the opposite corner from Silas, who thinks (if I read him correctly) that all of physical reality is computably describable, and hence far simpler than arithmetic (in the sense of being describable using only a small and relatively simple fragment of arithmetic).
Be that as it may, I’ve blogged quite a bit about the nature of the complexity of arithmetic (see an old post called “Non-Simple Arithmetic” on my blog). In brief: a) no set of axioms suffices to specify the standard model of arithmetic (i.e. to distinguish it from other models). And b) we have the subjective reports of mathematicians about the complexity of their subject matter, which I think should be given at least as much weight as the subjective reports of ecologists. (There are a c), d) and e) as well, but in this short comment, I’ll rest my case here.)
Your biggest problem here, and in your blog posts, is that you equivocate between the structure of the standard natural numbers (N) and the theory of that structure (T(N), also known as True Arithmetic). The former is recursive and (a reasonable encoding of) it has pretty low Kolmogorov complexity. The latter is wildly nonrecursive and has infinite K-complexity. (See almost any of Chaitin’s work on algorithmic information theory, especially the Omega papers, for definitions of the K-complexity of a formal system.)
The difference between these two structures comes from the process of translating between them. Once explained properly, it’s almost intuitive to a recursion theorist, or a computer scientist versed in logic, that there’s a computable reduction from any language in the Arithmetic Hierarchy to the language of true statements of True Arithmetic. This implies that going from a description of N to a truth-enumerator or decision procedure for T(N) requires a hypercomputer with an infinite tower of halting, meta-halting, … meta^n-halting … oracles.
However, it so happens that simulating the physical world (or rather, our best physical ‘theories’, which in a mathematical sense are structures, not theories) on a Turing machine does not actually require T(N), only N. We only use theories, as opposed to models, of arithmetic, when we go to actually reason from our description of physics to consequences. And any such reasoning we actually do, just like any pure mathematical reasoning we do, depends only on a finite-complexity fragment of T(N).
Now, how does this make biology more complex than arithmetic? Well, to simulate any biological creature, you need N plus a bunch of biological information, which together has more K-complexity than just N. To REASON about the biological creature, at any particular level of enlightenment, requires some finite fragment of T(N), plus that extra biological information. To enumerate all true statements about the creature (including deeply-alternating quantified statements about its counterfactual behaviour in every possible circumstance), you require the infinite information in T(N), plus, again, that extra biological information. (In the last case it’s of course rather problematic to say there’s more complexity there, but there’s certainly at least as much.)
Note that I didn’t know all this this morning until I read your blog argument with Silas and Snorri; I thank all three of you for a discussion that greatly clarified my grasp on the levels of abstraction in play here.
(This morning I would have argued strongly against your Platonism as well; tonight I’m not so sure...)
Splat: Thanks for this; it’s enlightening and useful.
The part I’m not convinced of this:
A squirrel is a finite structure; it can be specified by a sequence of A’s, C’s, G’s and T’s, plus some rules for protein synthesis and a finite number of other facts about chemistry. (Or if you think that leaves something out, it can be described by the interactions among a large but finite collection of atoms.) So I don’t see where we need all of N to simulate a squirrel.
Well, if you need to simulate a squirrel for just a little while, and not for unbounded lengths of time, a substructure of N (without closure under the operations) or a structure with a considerable amount of sharing with N (like 64-bit integers on a computer) could suffice for your simulation.
The problem you encounter here is that these substructures and near-substructures, once they reach a certain size, actually require more information to specify than N itself. (How large this size is depends on which abstract computer you used to define your instance of K-complexity, but the asymptotic trend is unavoidable.)
If this seems paradoxical, consider that after a while the shortest computer program for generating an open initial segment of N is a computer program for generating all of N plus instructions indicating when to stop.
Either way, it so happens that the biological information you’d need to simulate the squirrel dwarfs N in complexity, so even if you can find a sufficient substitute for N that’s “lightweight” you can’t possibly save enough to make your squirrel simulation less complex than N.
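(A toy version of the initial-segment point, with names of my own choosing: the description of the segment {0, …, k−1} is essentially the description of all of N plus the information in k itself, i.e. roughly log2(k) extra bits telling the generator when to stop.)

```python
from itertools import count, islice

def naturals():
    """A complete (lazy) description of N: a fixed, tiny generator."""
    return count(0)

def initial_segment(k):
    """The segment {0, ..., k-1}: the same generator for all of N,
    plus the single extra datum k saying when to stop."""
    return list(islice(naturals(), k))
```

For large k, specifying the segment directly (listing its elements) is far longer than "the N-generator plus k", which is why the shortest program for a big initial segment looks like a program for all of N with a stopping instruction bolted on.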
Splat:
1)
This depends on what you mean by “specify”. To distinguish N from other mathematical structures requires either an infinite (indeed non-recursive) amount of information or a second order specification including some phrase like “all predicates”. Are you referring to the latter? Or to something else I don’t know about?
2) I do not know Chaitin’s definition of the K-complexity of a structure. I’ll try tracking it down, though if it’s easy for you to post a quick definition, I’ll be grateful. (I do think I know how to define the K-complexity of a theory.) I presume that if I knew this, I’d know your answer to question 1).
3) Whatever the definition, the question remains whether K-complexity is the right concept here. Dawkins’s argument does not define complexity; he treats it as “we know it when we see it”. My assertion has been that Dawkins’s argument applies in a context where it leads to an incorrect conclusion, and therefore can’t be right. To make this argument, I need to use Dawkins’s intended notion of complexity, which might not be the same as Chaitin’s or Kolmogorov’s. And for this, the best I can do is to infer from context what Dawkins does and does not see as complex. (It is clear from context that he sees complexity as a general phenomenon, not just a biological one.)
4) The natural numbers are certainly an extremely complex structure in the everyday sense of the word; after thousands of years of study, people are learning new and surprising things about them every day, and there is no expectation that we’ve even scratched the surface. This is, of course, a manifestation of the “wildly nonrecursive” nature of T(N), all of which is reflected in N itself. And this, again, seems pretty close to the way Dawkins uses the word.
5) I continue to be most grateful for your input. I see that Silas is back to insisting that you can’t simulate a squirrel with a simple list of axioms, after having been told forty-eight bajillion times (here and elsewhere) that nobody’s asserting any such thing; my claim is that you can simulate a squirrel in the structure N, not in any particular axiomatic system. Whether or not you agree, it’s a pleasure to engage with someone who’s not obsessed with pummelling straw men.
Replying out of order:
2) A quick search of Google Scholar didn’t net me a Chaitin definition of K-complexity for a structure. This doesn’t surprise me much, as his uses of AIT in logic are much more oriented toward proof theory than model theory. Over here you can see some of the basic definitions. If you read page 7-10 and then my explanation to Silas here you can figure out what the K-complexity of a structure means. There’s also a definition of algorithmic complexity of a theory in section 3 of the Chaitin.
According to these definitions, the complexity of N is about a few hundred bits for reasonable choices of machine, and the complexity of T(N) is infinite.
1) It actually is pretty hard to characterize N extrinsically/intensionally; to characterize it with first-order statements takes infinite information (as above). The second-order characterization, by contrast, is a little hard to interpret. It takes a finite amount of information to pin down the model, but the second-order theory PA2 still has infinite K-complexity because of its lack of complete rules of inference.
Intrinsic/extensional characterizations, on the other hand, are simple to do, as referenced above. Really, Gödel Incompleteness wouldn’t be all that shocking in the first place if we couldn’t specify N any other way than its first-order theory! Interesting, yes, shocking, no. The real scandal of incompleteness is that you can so simply come up with a procedure for listing all the ground (quantifier-free) truths of arithmetic and yet passing either to or from the kind of generalizations that mathematicians would like to make is fraught with literally infinite peril.
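(A sketch of that listing procedure for ground truths, in a tiny term language of my own devising: deciding a quantifier-free equation is just evaluation, which is exactly the thing that stops working once unbounded quantifiers enter.)

```python
def eval_term(t):
    """Evaluate a ground (variable-free) term, given as either an int
    literal or a nested tuple ('+', a, b) or ('*', a, b)."""
    if isinstance(t, int):
        return t
    op, a, b = t
    x, y = eval_term(a), eval_term(b)
    return x + y if op == '+' else x * y

def ground_truth(lhs, rhs):
    """Decide a quantifier-free equation lhs = rhs by computing both
    sides. No analogous procedure exists for sentences with unbounded
    quantifiers (Godel/Tarski)."""
    return eval_term(lhs) == eval_term(rhs)
```

Enumerating all ground terms and filtering with this decision procedure lists every quantifier-free truth of arithmetic; the scandal is that so mechanical a listing coexists with the undecidability of the quantified theory.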
3&4) Actually I don’t think that Dawkins is talking about K-complexity, exactly. If that’s all you’re talking about, after all, an equal-weight puddle of boiling water has more K-complexity than a squirrel does. I think there’s a more involved, composite notion at work that builds on K-complexity and which has so far resisted full formalization. Something like this, I’d venture.
The complexity of the natural numbers as a subject of mathematical study, while certainly well-attested, seems to be of a different sense than either K-complexity or the above. Further, it’s unclear whether we should really be placing the onus of this complexity on N, on the semantics of quantification in infinite models (which N just happens to bring out), or on the properties of computation in general. In the latter case, some would say the root of the complexity lies in physics.
Also, I very much doubt that he had in mind mathematical structures as things that “exist”. Whether it turns out that the difference in the way we experience abstractions like the natural numbers and concrete physical objects like squirrels is fundamental, as many would have it, or merely a matter of our perspective from within our singular mathematical context, as you among others suspect, it’s clear that there is some perceptible difference involved. It doesn’t seem entirely fair to press the point this much without acknowledging the unresolved difference in ontology as the main point of conflict.
Trying to quantify which thing is more complex is really kind of a sideshow, although an interesting one. If one forces both senses of complexity into the K-complexity box then Dawkins “wins”, at the expense of both of you being turned into straw men. If one goes by what you both really mean, though, I think the two complexities are probably incommensurable (no common definition or scale) and the comparison is off-point.
5) Thank you. I hope the discussion here continues to grow more constructive and helpful for all involved.
Relevant link: http://lesswrong.com/lw/vh/complexity_and_intelligence/
Splat:
Thanks again for bringing insight and sanity to this discussion. A few points:
1) Your description of the structure N presupposes some knowledge of the structure N; the program that prints out the structure needs a first statement, a second statement, etc. This is, of course, unavoidable, and it’s therefore not a complaint; I doubt that there’s any way to give a formal description of the natural numbers without presupposing some informal understanding of the natural numbers. But what it does mean, I think, is that K-complexity (in the sense that you’re using it) is surely the wrong measure of complexity here—because when you say that N has low K-complexity, what you’re really saying is that “N is easy to describe provided you already know something about N”. What we really want to know is how much complexity is embedded in that prior knowledge.
1A) On the other hand, I’m not clear on how much of the structure of N is necessarily assumed in any formal description, so my point 1) might be weaker than I’ve made it out to be.
2) It has been my position all along that K-complexity is largely a red herring here in the sense that it need not capture Dawkins’s meaning. Your observation that a pot of boiling water is more K-complex than a squirrel speaks directly to this point, and I will probably steal it for use in future discussions.
3) When you talk about T(N), I presume you mean the language of Peano arithmetic, together with the set of all true statements in that language. (Correct me if I’m wrong.) I would hesitate to call this a theory, because it’s not recursively axiomatizable, but that’s a quibble. In any event, we do know what we mean by T(N), but we don’t know what we mean by T(squirrel) until we specify a language for talking about squirrels—a set of constant symbols corresponding to tail, head, etc., or one for each atom, or…, and various relations, etc. So T(N) is well defined, while T(squirrel) is not. But whatever language you settle on, a squirrel is still going to be a finite structure, so T(squirrel) is not going to share the “wild nonrecursiveness” of T(N) (which is closely related to the difficulty of giving an extrinsic characterization). That seems to me to capture a large part of the intuition that the natural numbers are more complex than a squirrel.
4) You are probably right that Dawkins wasn’t thinking about mathematical structures when he made his argument. But because he does claim that his argument applies to complexity in general, not just to specific instances, he’s stuck (I think) either accepting applications he hadn’t thought about or backing off the generality of his claim. It’s of course hard to know exactly what he meant by complexity, but it’s hard for me to imagine any possible meaning consistent with Dawkins’s usage that doesn’t make arithmetic (literally) infinitely more complex than a squirrel.
5) Thanks for trying to explain to Silas that he doesn’t understand the difference between a structure and an axiomatic system. I’ve tried explaining it to him in many ways, at many times, in many forums, but have failed to make any headway. Maybe you’ll have better luck.
6) If any of this seems wrong to you, I’ll be glad to be set straight.
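Point 3) above—that the theory of a finite structure avoids the “wild nonrecursiveness” of T(N)—can be made concrete: the truth of any first-order sentence in a finite structure is decidable by exhaustive search. A minimal sketch, where the universe, the relation, and the sentence are all invented for illustration:

```python
# A toy "squirrel" as a finite structure: a four-element universe and
# one binary relation. All names here are hypothetical illustrations.
universe = ["head", "tail", "paw", "body"]
attached = {("head", "body"), ("tail", "body"), ("paw", "body")}

def sentence_holds():
    # Decide "there exists an x such that every other y is attached to x"
    # by brute force -- possible precisely because the structure is finite.
    return any(
        all((y, x) in attached for y in universe if y != x)
        for x in universe
    )

print(sentence_holds())  # True: everything else is attached to "body"
```

No analogous procedure exists for N, where quantifiers range over an infinite universe—which is where the nonrecursiveness of T(N) comes from.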
1) Unless they say otherwise, you should assume someone is using the standard meanings for the terms they use, which would mean Dawkins is using the intuitive definition, which closely parallels K-complexity.
2) If you’re going to write a book hundreds of pages long in which you crucially rely on the concept of complexity, you need to define it explicitly. That’s just how it works. If you know which concept of complexity is “the” right one here, you need to spell it out yourself.
3) Most importantly, you have shown Dawkins’s argument to be in error in the context of an immaterial realm that is not observable and does not interact with this universe. Surely, you can think of some reason why Dawkins doesn’t intend to refer to such realms, can’t you? (Hint: Dawkins is an atheist, materialist, and naturalist—just like you, in other words, until it comes to the issue of math.)
ETA: If any followers of this exchange think I’m somehow not getting something, or being unfair to SteveLandsburg, please let me know, either as a reply in the thread or a PM, whether or not you use your normal handle.
Well, Silas, what I actually did was write a book 255 pages long of which this whole Dawkins/complexity thing occupies about five pages (29-34) and where complexity is touched on exactly once more, in a brief passage on pages 7-8. From the discrepancy between your description and reality, I infer that you haven’t read the book, which would help to explain why your comments are so bizarrely misdirected.
Oh, and I see that you’re still going on about axiomatic descriptions of squirrels, as if that were relevant to something I’d said. (Hint: A simulation is not an axiomatic system. That’s 48 bajillion and one.)
I have not read the entire book. I have read many long portions of it, mostly the philosophical ones and those dealing with physics. I was drawn to those on the assumption that surely you would have defined complexity in your exposition!
It’s misleading to say that because your usage of complexity takes only 8 pages, it’s insignificant. Rather, the point you make about complexity is your grounding for broader claims about the role mathematics plays in the universe, which you come back to frequently. The explicit mention of the term “complexity” is thus a poor measure of how much you rely on it.
But even if it were just 8 pages, you should still have defined it, and you should still not expect to have achieved insights on the topic, given that you haven’t defined it.
(I certainly wouldn’t want to buy it—why should I subsidize such confused thinking? I don’t even like your defenses of libertarianism, despite being libertarian.)
Ah, another suddenly-crucial distinction to make, so you can wiggle out of being wrong!
I should probably use this opportunity to both show I did read many portions, and show why Landsburg doesn’t get what it means to really explain something. His explanation of the Heisenberg Uncertainty Principle (which gets widely praised as a good explanation for some reason) is this: think of an electron as moving in a circle within a square. If you measure its vertical position, its closeness to the top determines the chance of getting a “top” or “bottom” reading.
Likewise the horizontal direction: if you measure the horizontal position of the electron, your chances of getting a “left” or “right” reading depends on how far it is from that side.
And for the important part: why can’t you measure both at the same time? Landsburg’s brilliant explanation: um, because you can’t.
But that’s what the explanation was supposed to demystify in the first place! You can’t demystify a mystery by feeding it back in as a black-box fact. To explain it, you would need to explain enough of the dynamics of quantum systems that, at the end, your reader doesn’t view precise measurement of both position and momentum as even being coherent. Saying “oh, you can’t because you can’t” isn’t an explanation.
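To make the complaint concrete, here is a sketch of the toy model as described above (the names and details are my own reconstruction, not anything from the book):

```python
import random

def measure_vertical(y):
    # Height y in [0, 1) inside the square sets the chance of "top".
    return "top" if random.random() < y else "bottom"

def measure_horizontal(x):
    # Closeness to the right side sets the chance of "right".
    return "right" if random.random() < x else "left"

# Nothing in this toy model prevents measuring both at once -- the
# incompatibility has to be stipulated from outside, which is exactly
# the "because you can't" complaint.
x, y = 0.25, 0.9
print(measure_vertical(y), measure_horizontal(x))
```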
I didn’t say that. Read it again. I said that there is some finite axiom list that can describe squirrels, but it’s not just the axioms that suffice to let you use arithmetic. It’s those, plus biological information about squirrels. But this arithmetic is not the infinitely complex arithmetic you talk about in other contexts!
You can’t—you need axioms beyond those that specify N. The fact that the biological model involving those axioms uses math, doesn’t mean you’ve described it once you’ve described the structure N. So whether or not you call that “simulating it in the structure N”, it’s certainly more complex than just N.
I’m responding here to your invitation in the parent, since this post provides some good examples of what you’re not getting.
Simulating squirrels and using arithmetic require information, but that information is not supplied in the form of axioms. The best way to imagine this in the case of arithmetic is in terms of a structure.
Starting from the definition in that wikipedia page, you can imagine giving the graphs of the universe and functions and relations as Datalog terms. (Using terms instead of tuples keeps the graphs disjoint, which will be important later.) Like so:
Universe:
is_number(0), is_number(1), …

0:
zero(0)

S:
next(0,1), next(1,2), …

+:
add_up_to(0,0,0), add_up_to(0,1,1), add_up_to(1,0,1), …

…and so on.
Then you use some simple recursive coding of Datalog terms as binary. What you’re left with is just a big (infinite) set of binary strings. The Kolmogorov complexity of the structure N (the thing you need in order to use arithmetic) is then the size of the shortest program that enumerates the set, which is actually very small.
Note that this is actually the same arithmetic that Steve is talking about! It is just a different level of description, one that is much simpler but entirely sufficient to conduct simulations with. It is only in understanding the long-term behavior of simulations without running them that one requires any of the extra complexity embodied in T(N) (the theory). To actually run them you just need N (the structure).
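That “very small program” can actually be written down. A sketch in Python—the term spellings follow the listing above, while the enumeration order is my own choice:

```python
from itertools import count

def enumerate_N():
    # Enumerate the Datalog-style terms describing the structure N.
    # The program is a few lines even though the set is infinite --
    # the sense in which N has low Kolmogorov complexity.
    for n in count():
        yield f"is_number({n})"
        yield f"next({n},{n + 1})"
        for a in range(n + 1):
            yield f"add_up_to({a},{n - a},{n})"

g = enumerate_N()
print([next(g) for _ in range(5)])
# ['is_number(0)', 'next(0,1)', 'add_up_to(0,0,0)', 'is_number(1)', 'next(1,2)']
```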
The fact that you don’t seem to understand this point yet leads me to believe you were being a little unfair when you said:
Now, if you want to complete the comparison, imagine you’re creating a structure that includes a universe with squirrel-states and times, and a function from time to squirrel state. This would look something like:
is_time(1:00:00), is_time(1:00:01), …

is_squirrel_state(&lt;eating nut&gt;), is_squirrel_state(&lt;rippling tail&gt;), is_squirrel_state(&lt;road pizza&gt;)

squirrel_does(1:00:00, &lt;rippling tail&gt;), …

The squirrel states, though, will not be described by a couple of words like that, but by incredibly detailed descriptions of the squirrel’s internal state—what shape all its cells are, where all the mRNAs are on their way to the ribosomes, etc. The structure you come up with will take a much bigger program to enumerate than N will. (And I know you already agree with the conclusion here, but making the correct parallel matters.)
(Edit: fixed markup.)
I wasn’t careful to distinguish axioms from other kinds of information in the model, and I think it’s a distraction to do so because it’s just an issue of labels (which as you probably saw from the discussion is a major source of confusion). My focus was on tabulating the total complexity of whatever-is-being-claimed-is-significant. For that, you only need to count up how much information goes into your “message” describing the data (in the “Minimum Message Length criterion” sense of “message”). Anything in such a message can be described without loss of generality as an axiom.
If I want to describe squirrels, I will find, like most scientists find, that the job is much easier if I can express things using arithmetic. Arithmetic is so helpful that, even after accounting for the cost of telling you how to use it (the axioms-or-whatever of math), I still save in total message length. Whether you call the squirrel info I gathered from nature, or the specification of math, the “axioms” doesn’t matter.
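A toy illustration of that message-length saving (not a real MML computation—zlib stands in for a general-purpose coder, and the rule’s interpreter, the “axioms-or-whatever”, is assumed shared and amortized):

```python
import zlib

# Describe the first 500 squares two ways and compare message lengths.
data = ",".join(str(n * n) for n in range(500)).encode()

explicit_message = len(zlib.compress(data))  # send the numbers themselves
rule = b"n*n for n in range(500)"            # send an arithmetical rule instead
rule_message = len(rule)

# The description that leans on arithmetic is far shorter than any listing.
print(rule_message, "<", explicit_message)
```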
But it’s not the same arithmetic SteveLandsburg is talking about, if you follow through to the implications he claims fall out from it. He claims arithmetic—the infinitely complex one—runs the universe. It doesn’t. The universe only requires the short message specifying N, plus the (finite) particulars of the universe. Whatever infinitely-complex thing he’s talking about from a “different level of description” isn’t the same thing, and can’t be the same thing.
What’s more, the universe can’t contain that thing because there is no (computable) isomorphism between it and the universe. As we derive the results of longer and longer chains of reasoning, our universe starts to contain more and more complex pieces of that thing, but it still wouldn’t be somehow fundamental to the universe’s operation—not if we’re just now getting to contain pieces of it.
I’m sorry, I don’t see how that contradicts what I said or shows a different parallel. Now, I certainly didn’t use the N vs. T(N) terminology you did, but I clearly explained how there have to be two separate “arithmetics” in play here, as best summarized in my comment here. Whatever infinitely complex arithmetic SteveLandsburg is talking about, isn’t the one that runs the universe. The insights on one don’t apply to the other.
Okay, pretend I’ve given you the axioms sufficient for you to +-*/. Can you simulate squirrels now? Of course not. You still have to go out and collect information about squirrels and add it to your description of the axioms of arithmetic (which suffice for all of N) to have a description of squirrels.
You claim that because you can simulate squirrels with (a part of) N, then N suffices to simulate squirrels. But this is like saying that, because you know the encoding method your friend uses to send you messages, you must know the content of all future messages.
That’s wrong, because those are different parts of the compressed data: one part tells you how to decompress, another tells you what you’re decompressing. Knowing how to decompress (i.e., the axioms of N) is different from knowing the string to be decompressed by that method (i.e. the arithmetic symbols encoding squirrels).
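The decompression analogy in code—zlib is just a stand-in for “a known method”, and the payloads are the hypothetical squirrel data:

```python
import zlib

# The method ("how to decompress") is fixed and known in advance;
# the payloads ("what you're decompressing") are not. Knowing the
# first tells you nothing about the second.
decompress = zlib.decompress

payload_1 = zlib.compress(b"squirrel: rippling tail")
payload_2 = zlib.compress(b"squirrel: eating nut")

print(decompress(payload_1))  # b'squirrel: rippling tail'
print(decompress(payload_2))  # b'squirrel: eating nut'
```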
By the way, I really hope your remark about Splat’s comment being “enlightening” was just politeness, and that you didn’t actually mean it. Because if you did, that would mean you’re just now learning the distinction between N and T(N), the equivocation between which undermines your claims about arithmetic’s relation to the universe.
And much of his comment was a restatement of my point about the difference between the complex arithmetic you refer to, and the arithmetic the universe actually runs on. (I’m not holding my breath for a retraction or a mea culpa or anything, just letting people know what they’re up against here.)
Because remember—nothing is more important to SilasBarta than politeness!
Touche :-P
Again, this word complexity is used in many ways. Complexity in the sense of humans find this complicated is a different concept from complexity in the sense of Kolmogorov complexity.
Don’t worry guys, I didn’t let you down. I addressed the issue from the perspective of Kolmogorov complexity in my first blog response. Landsburg initially replied with (I’m paraphrasing), “so what if you became an expert on information theory? That’s not the only meaning of complexity.”
Only later did he try to claim that he also meets the Kolmogorov definition.
(And FWIW, I’m not an expert on information theory—it’s just a hobby. I guess my knowledge just looked impressive to someone...)
Then what do you mean when you say “integers”^H^H “natural numbers”, if no set of premises suffices to talk about it as opposed to something else?
Anyway, no countable set of first-order axioms works. But a finite set of second-order axioms does. So to talk about the natural numbers, it suffices merely to think that when you say “Any predicate that is true of zero, and is true of the successor of every number it is true of, is true of all natural numbers” you made sense when you said “any predicate”.
It is this sort of minor-seeming yet important technical inaccuracy that separates “The Big Questions” from “Good and Real”, I’m afraid.
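Written out, the second-order induction axiom paraphrased above is (in one standard rendering):

```latex
\forall P\;\Bigl[\bigl(P(0) \,\land\, \forall n\,\bigl(P(n) \rightarrow P(S(n))\bigr)\bigr) \;\rightarrow\; \forall n\, P(n)\Bigr]
```

Everything turns on what the single second-order quantifier over P ranges over, which is the issue taken up in the replies.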
Natural numbers, rather. (Minor typo.)
I think that you have to be careful about claims that second-order logic fixes a unique model. Granted, you can derive the statement “There exists a unique model of the axioms of arithmetic.”
But, for example, what in reality does your “any predicate” quantifier range over? If, under interpretation, it ranges over subsets of the domain of discourse, well, what exactly constitutes a subset? This presumes that you have a model of some set theory in hand. How do you specify which model of set theory you’re using? So far as I know, there’s no way out of this regress.
[ETA: I’m not a logician. I’m definitely open to correction here.]
[ETA2: And now that I read more carefully, you were acknowledging this point when you wrote, “it suffices merely to think that . . . you made sense when you said ‘any predicate’.”
However, you didn’t acknowledge this issue in your earlier comment. I think that it’s too significant an issue to be dismissed with an “it suffices merely...”. When an infinite regress threatens, it doesn’t suffice to push the issue back a level and say “it suffices merely to show that that’s the last level.”]
Sure, and that’s the age-old argument for why we should not take second-order logic at face value. But in that case we cannot go around blithely talking about the integers, for there is no language we could use to speak of them, or of any other infinite set. We would even be forbidden from saying that there is something we cannot talk about—and this is not surprising: what is it you can’t refer to?
I’m not familiar with the literature of this argument. (It was probably clear from the tentativeness of my comment that I was thinking my own murky way through this issue.)
You seem to take it as the default that we should take second-order logic at face value. (Now that I know what you mean by “face value”, I see that you did acknowledge this issue in your earlier comment.) But I should think that the default would be to be skeptical about this. Why expect that we have a canonical model when we talk about sets or predicates if we’re entertaining skepticism that we have a canonical model for integer-talk?
We don’t. Skepticism of sets, predicates, and canonical integers are all the same position in the debate.
And so is skepticism of canonical Turing machines, as far as I can tell. Specifically, skepticism that there is always a fact of the matter as to whether a given TM halts.
I think you might be able to make the skeptical position precise by constructing nonstandard variants of TMs: number the time steps and tape squares with nonstandard naturals, and let the number of symbols and states be nonstandard as well. You could then relate these back to the nonstandard models that produced them by using a recursive description of N to regenerate the nonstandard model of the natural numbers you started with. This would show that there are nonstandard variants of computability, each of which believes in a different ‘standard’, ‘minimal’ model of arithmetic, is unaware of the existence of smaller models, and is thus presumably unaware of the ‘weaker’ (because they halt less often) notions of Turing Machines.
Now, I’m not yet sure if this construction goes through as I described it; for me, if it does it weighs against the existence of a ‘true’ Standard Model and if it doesn’t it weighs in favor.
I’m not sure, but I think it’s impossible to construct a computable nonstandard model of the integers (one where you can implement operations like +).
It is in fact provably impossible to construct a computable nonstandard model (where, say, S and +, or S and × are both computable relations) in a standard model of computation. What I was referring to was a nonstandard model that was computable according to an equally nonstandard definition of computation, one that makes explicit the definitional dependence of Turing Machines on the standard natural numbers and replaces them with nonstandard ones.
The question I’m wondering about is whether such a definition leads to a sensible theory of computation (at least on its own terms) or whether it turns out to just be nonsense. This may have been addressed in the literature but if so it’s beyond the level to which I’ve read so far.
Would you give a reference? I found it easy to find assertions such as “the completeness theorem is not constructively provable,” but this statement is a little stronger.
Tennenbaum’s Theorem
I believe that this claim is based on a defective notion of what it takes to refer to something successfully. The issue that we’re talking about here is a manifestation of that defect. I’m trying to work out a different conception of reference, but it’s very much a work in progress.
No, it wouldn’t—he’s saying basically the same thing I did. The laws of physics are computable. In describing observations, we use concepts from math. The reason we do so is that it allows simpler descriptions of the universe.
Right, I’ve explained before why your arguments are in error. We can talk more about that some other time.
No, I accept that they’re separate errors.
Okay:
If what you describe here is what you mean by both “the natural numbers” and “the actual standard model of the natural numbers”, then I will accept this definition for the purposes of argument—but note that, used consistently, it doesn’t have the properties you claim.
Disagree with this. Dawkins has been referring to existing complexity in the universe and the context of every related statement confirms this. But even accepting it, the rest of your argument still doesn’t follow.
Disagree. Again, let’s keep the same definition throughout. Recall what you said the natural numbers were:
The model arose from something simpler (like basic human cognition of counting of objects). The Map Is Not The Territory.
Ah, but now I know what you’re going to say: you meant the sort of Platonic-space model of those natural numbers, that exists independently of whatever’s in our universe, has always been complex.
So, if you assume (like theists) that there’s some sort of really-existing realm, outside of the universe, that always has been, and is complex, then you can prove that … there’s a complexity that has always existed. Which is circular.
Silas: I agree that if arithmetic is a human invention, then my counterexample goes away.
If I’ve read you correctly, you believe that arithmetic is a human invention, and therefore reject the counterexample.
On that reading, a key locus of our disagreement is whether arithmetic is a human invention. I think the answer is clearly no, for reasons I’ve written about so extensively that I’d rather not rehash them here.
I’m not sure, though, that I’ve read you correctly, because you occasionally say things like “The Map Is Not The Territory” which seems to presuppose some sort of platonic Territory. But maybe I just don’t understand what you meant by this phrase.
[Incidentally, it occurs to me that perhaps you are misreading my use of the word “model”. I am using this word in the technical sense that it’s used by logicians, not in any of its everyday senses.]
Map and territory
More: Map and Territory (sequence)
Then you agree that your “counterexample” amounts to an assumption. If a Platonic realm exists (in some appropriate sense), and if Dawkins was haphazardly including that sense in the universe he is talking about when he describes complexity arising, then he is wrong that complexity always comes from simplicity.
If you assume Dawkins is wrong, he’s wrong. Was that supposed to be insightful?
It’s a false dispute, though. When you clarify the substance of what these terms mean, there are meanings for which we agree, and meanings for which we don’t. The only error is to refuse to “cash out” the meaning of “arithmetic” into well-defined predictions, but instead keep it boxed up into one ambiguous term, which you do here, and which you did for complexity. (And it’s kind of strange to speak for hundreds of pages about complexity, and then claim insights on it, without stating your definition anywhere.)
One way we’d agree, for example, is if we take your statements about the Platonic realm to be counterfactual claims about phenomena isomorphic to certain mathematical formalisms (as I said at the beginning of the thread).
The definitions aren’t incredibly different, which is why we have the same term for both of them. If you spell out that definition more explicitly, the same problems arise, or different ones will pop up.
(By the way, this doesn’t surprise me. This is the fourth time you’ve had to define a term within a definition you gave in order to avoid being wrong. It doesn’t mean you changed that “subdefinition”. But genuine insights about the world don’t look this contorted, where you have to keep saying, “No, I really meant this when I was saying what I meant by that.”)
Silas: This is really quite frustrating. I keep telling you exactly what I mean by arithmetic (the standard model of the natural numbers); I keep using the word to mean this and only this, and you keep claiming that my use of the word is either ambiguous or inconsistent. It makes it hard to imagine that you’re actually reading before you’re responding, and it makes it very difficult to carry on a dialogue. So for that reason, I think I’ll stop here.
When I saw this in the comment feed, I thought “Wow, Steve Landsburg on Less Wrong!” Then I saw that he was basically just arguing with one person.
While I think you’re not correct in this debate, I hope you’ll continue to post here. Your books have been a source of much entertainment and joy for me.
Bo102010: Thanks for the kind words. I’m not sure what the community standards are here, but I hope it’s not inappropriate to mention that I post to my own blog almost every weekday, and of course I’ll be glad to have you visit.
I can second that. For lack of education I cannot tell who’s right in this debate—though I don’t think anybody is, since it’s just pure metaphysical musing about the nature of reality. But so far I have really enjoyed reading your book. I also hope you’ll participate in other discussions here at lesswrong.com. It’s my favorite place.
Sorry for possible bad publicity; I made the mistake of quick-sharing something I had just read and found intriguing, without the ability to portray it adequately—especially on this forum, which is about rationality as a practical tool to attain your goals rather than pure philosophy detached from evidence and prediction.
I also subscribed to your blog.
P.S. Sent you a message; you can find it in your inbox.
Are you reading my replies? Saying that arithmetic is “the standard model of the natural numbers” does not resolve the disagreement.
For one thing, it doesn’t give me predictions (i.e. constraints on expectations) that we check to see who’s right.
For another, it’s not well-defined—it doesn’t tell me how I would know (as is necessary for the area of dispute) if arithmetic “exists” at this or that time. (And, of course, as you found out, it requires further specification of what counts as a model...)
(ETA: See Eliezer_Yudkowsky’s great posts on how to dissolve a question and get beyond there being One Right Answer to e.g. the vague question about a tree falling in the forest when no one’s around.)
So if you can’t see why that doesn’t count as cashing out the term and identifying the real disagreement, then I agree further discussion is pointless.
But truth be told, you’re not going to “stop there”. You’re going to continue on, promoting your “deep” insights wherever you can, to people who don’t know any better, instead of doing the real epistemic labor of achieving insights about the world.
That doesn’t sound right. Can you point me to for example a Wikipedia page about this?
First-order logic can’t distinguish between different sizes of infinity: any finite or countable set of first-order statements with an infinite model has models of every infinite size.
However, if you take second-order logic at face value, it’s actually quite easy to uniquely specify the integers up to isomorphism. The price of this is that second-order logic is not complete—the full set of semantic implications, the theorems which follow, can’t be derived by any finite set of syntactic rules.
So if you can use second-order statements—and if you can’t, it’s not clear how we can possibly talk about the integers—then the structure of integers, the subject matter of integers, can be compactly singled out by a small set of finite axioms. However, the implications of these axioms cannot all be printed out by any finite Turing machine.
Appropriately defined, you could state this as “finitely complex premises can yield infinitely complex conclusions” provided that the finite complexity of the premises is measured by the size of the Turing machine which prints out the axioms, yielding is defined as semantic implication (that which is true in all models of which the axioms are true), and the infinite complexity of the conclusions is defined by the nonexistence of any finite Turing machine which prints them all.
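The “size of the Turing machine which prints out the axioms” can be made concrete with a toy sketch—informal axiom strings, Python standing in for a Turing machine, and this particular rendering of the (second-order) axioms being my own:

```python
def print_axioms():
    # A complete second-order axiom list for arithmetic, as informal
    # strings. The length of this program bounds the "finite complexity
    # of the premises" in the program-size sense above.
    axioms = [
        "forall n: S(n) != 0",
        "forall m,n: S(m) = S(n) -> m = n",
        "forall n: n + 0 = n",
        "forall m,n: m + S(n) = S(m + n)",
        "forall n: n * 0 = 0",
        "forall m,n: m * S(n) = (m * n) + m",
        "forall P: (P(0) and forall n: (P(n) -> P(S(n)))) -> forall n: P(n)",
    ]
    for axiom in axioms:
        print(axiom)
    return axioms

print_axioms()
```

No analogous program can print every semantic consequence of these few lines—that asymmetry is the whole point.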
However this is not at all the sort of thing that Dawkins is talking about when he talks about evolution starting simple and yielding complexity. That’s a different sense of complexity and a different sense of yielding.
That makes more sense, thanks.
Any recommended reading on this sort of thing?
Decidability of Euclidean geometry: see Wikipedia’s “Some decidable theories” section.
I don’t know where Landsburg gets the claim that we can know all the truths of arithmetic.
Richard Kennaway:
I don’t know where Landsburg gets the claim that we can know all the truths of arithmetic.
I don’t know where you got the idea that I’d ever make such a silly claim.
I misinterpreted this: “we can never know all the truths of euclidean geometry, but we can still specify euclidean geometry via a set of axioms. Not so for arithmetic.”
Richard: Gotcha. Sorry if it was unclear which part the “not so” referred to.
Note that Landsburg is thus also incorrect in saying “we can never know all the truths of euclidean geometry”.
Eliezer: There are an infinite number of truths of euclidean geometry. How could our finite brains know them all?
This was not meant to be a profound observation; it was meant to correct Silas, who seemed to think that I was reading some deep significance into our inability to know all the truths of arithmetic. My point was that there are lots of things we can’t know all the truths about, and this was therefore not the feature of arithmetic I was pointing to.
A decision procedure is a finite specification of all truths of euclidean geometry; I can use that finite fact anywhere I could use any truth of geometry. I suppose there is a difference, but even so, it’s the wrong thing to say in a Gödelian discussion.
Yes, it was. When I and several others pointed out that arithmetic isn’t actually complex, you responded by saying that it is infinitely complex, because it can’t be finitely described, because to do so … you’d have to know all the truths.
Am I misreading that response? If so, how do you reconcile arithmetic’s infinite complexity with the fact that scientists in fact use it to compress descriptions of the world? An infinitely complex entity can’t help to compress your descriptions.
What is this “it”? There are some who claim that when we think about arithmetic, we are thinking about a specific model of the usual axioms for arithmetic, which appears to be your view here. Every statement of arithmetic is either true or false in that model. But what reason is there to make this claim? We cannot directly intuit the truth of arithmetical statements, or mathematicians would not have to spend so much effort on proving theorems. We may observe that we have a belief that we are indeed thinking about a definite model of the axioms, but why should we believe that belief?
To say that we intuit a thing is no more than to say we believe it but do not know why.
As far as I understand it, the claim is that both camps are asking wrong or useless questions. Reality is inherently complex and logically possible. To ask why complexity is there at all is to ask for the rainbow’s end. But I’ve only gotten to page 34 of his book today, so...
Anyway, I have to get some sleep soon. Will come back to it tomorrow. Thanks.
Humans are born with a basic sense of arithmetic that evolved over millions of years. Arithmetic also doesn’t happen to be very complex.
To nitpick, arithmetic is a cultural invention just like all other tools.
As far as I remember, the children of the Pirahã performed better on arithmetic tasks than their parents.
There’s a lot of stuff, like differentiating between two monkey faces, that small children are able to do but that gets lost as they grow up and those skills aren’t used.
So we could have evolved to put 3 apples onto a table, count them and sum them up to 4 apples? Humans are born with sight; does that imply that light evolved by natural selection? And I don’t know what makes you think that arithmetic is not complex, when indeed not even an infinite set of axioms can fully describe it. Also, arithmetic is irreducibly complex: eliminate the number three and all of arithmetic falls apart. The patterns that emerge just from the existence of the natural numbers are infinite. Anyway, I thought the quote was food for thought, so I posted it. The book is a good read, highly recommended.