Created Already In Motion
Followup to: No Universally Compelling Arguments, Passing the Recursive Buck
Lewis Carroll, who was also a mathematician, once wrote a short dialogue called What the Tortoise said to Achilles. If you have not yet read this ancient classic, consider doing so now.
The Tortoise offers Achilles a step of reasoning drawn from Euclid’s First Proposition:
(A) Things that are equal to the same are equal to each other.
(B) The two sides of this Triangle are things that are equal to the same.
(Z) The two sides of this Triangle are equal to each other.
Tortoise: “And if some reader had not yet accepted A and B as true, he might still accept the sequence as a valid one, I suppose?”
Achilles: “No doubt such a reader might exist. He might say, ‘I accept as true the Hypothetical Proposition that, if A and B be true, Z must be true; but, I don’t accept A and B as true.’ Such a reader would do wisely in abandoning Euclid, and taking to football.”
Tortoise: “And might there not also be some reader who would say, ‘I accept A and B as true, but I don’t accept the Hypothetical’?”
Achilles, unwisely, concedes this; and so asks the Tortoise to accept another proposition:
(C) If A and B are true, Z must be true.
But, asks the Tortoise, suppose that he accepts A and B and C, but not Z?
Then, says Achilles, he must ask the Tortoise to accept one more hypothetical:
(D) If A and B and C are true, Z must be true.
Douglas Hofstadter paraphrased the argument some time later:
Achilles: If you have [(A⋀B)→Z], and you also have (A⋀B), then surely you have Z.
Tortoise: Oh! You mean <{(A⋀B)⋀[(A⋀B)→Z]}→Z>, don’t you?
As Hofstadter says, “Whatever Achilles considers a rule of inference, the Tortoise immediately flattens into a mere string of the system. If you use only the letters A, B, and Z, you will get a recursive pattern of longer and longer strings.”
By now you should recognize the anti-pattern Passing the Recursive Buck; and though the counterspell is sometimes hard to find, when found, it generally takes the form The Buck Stops Immediately.
The Tortoise’s mind needs the dynamic of adding Y to the belief pool when X and (X→Y) are previously in the belief pool. If this dynamic is not present—a rock, for example, lacks it—then you can go on adding in X and (X→Y) and (X⋀(X→Y))→Y until the end of eternity, without ever getting to Y.
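To make the distinction concrete, here is a minimal sketch (my illustration, not part of the original dialogue) in Python, with beliefs stored as plain strings and implications written as "X -> Y". The inference step is executed code; the stored strings, by themselves, do nothing.

```python
# A minimal sketch: beliefs stored as strings, with the modus ponens dynamic
# implemented as executed code rather than as another stored string.

def modus_ponens_step(beliefs):
    """The dynamic itself: for every stored implication "X -> Y" whose
    antecedent X is in the pool, add the consequent Y to the pool."""
    derived = set()
    for b in beliefs:
        if " -> " in b:
            antecedent, consequent = b.split(" -> ", 1)
            if antecedent in beliefs:
                derived.add(consequent)
    return beliefs | derived

# A mind created already in motion: the step function actually runs,
# so the buck stops immediately.
mind_pool = {"A and B", "A and B -> Z"}
mind_pool = modus_ponens_step(mind_pool)
print("Z" in mind_pool)  # True

# A "rock": it only accumulates more and more hypotheticals as data,
# never runs any step, and so never reaches Z.
rock_pool = {"A and B", "A and B -> Z", "(A and B) and (A and B -> Z) -> Z"}
print("Z" in rock_pool)  # False, no matter how many such strings you add
```

The only point the sketch carries is that `modus_ponens_step` is something the system does; writing its text into the pool as one more string would change nothing.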
The phrase that once came into my mind to describe this requirement, is that a mind must be created already in motion. There is no argument so compelling that it will give dynamics to a static thing. There is no computer program so persuasive that you can run it on a rock.
And even if you have a mind that does carry out modus ponens, it is futile for it to have such beliefs as...
(A) If a toddler is on the train tracks, then pulling them off is fuzzle.
(B) There is a toddler on the train tracks.
...unless the mind also implements:
Dynamic: When the belief pool contains “X is fuzzle”, send X to the action system.
(Added: Apparently this wasn’t clear… By “dynamic” I mean a property of a physically implemented cognitive system’s development over time. A “dynamic” is something that happens inside a cognitive system, not data that it stores in memory and manipulates. Dynamics are the manipulations. There is no way to write a dynamic on a piece of paper, because the paper will just lie there. So the text immediately above, which says “dynamic”, is not dynamic. If I wanted the text to be dynamic and not just say “dynamic”, I would have to write a Java applet.)
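As a companion sketch (again mine, and merely illustrative; the belief strings and function names are made up), here is what implementing that dynamic, as opposed to merely storing a description of it, might look like:

```python
# Hypothetical illustration: the fuzzle dynamic as executed machinery.
# Beliefs are inert strings; only the running dispatch turns them into action.

actions_taken = []

def action_system(plan):
    """Stand-in for the motor system."""
    actions_taken.append(plan)

def fuzzle_dynamic(beliefs):
    """Dynamic: when the belief pool contains "X is fuzzle",
    send X to the action system."""
    for b in beliefs:
        if b.endswith(" is fuzzle"):
            action_system(b[: -len(" is fuzzle")])

belief_pool = {
    "toddler on tracks",                   # belief (B)
    "pull toddler off tracks is fuzzle",   # conclusion of (A) and (B) via modus ponens
}

fuzzle_dynamic(belief_pool)
print(actions_taken)  # ['pull toddler off tracks']

# Writing the *text* of fuzzle_dynamic into belief_pool as one more string
# would send nothing anywhere; only calling it does.
```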
Needless to say, having the belief...
(C) If the belief pool contains “X is fuzzle”, then “send ‘X’ to the action system” is fuzzle.
...won’t help unless the mind already implements the behavior of translating hypothetical actions labeled ‘fuzzle’ into actual motor actions.
By dint of careful arguments about the nature of cognitive systems, you might be able to prove...
(D) A mind with a dynamic that sends plans labeled “fuzzle” to the action system, is more fuzzle than minds that don’t.
...but that still won’t help, unless the listening mind previously possessed the dynamic of swapping out its current source code for alternative source code that is believed to be more fuzzle.
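A last sketch (hypothetical, with invented names) of the further dynamic that (D) presupposes: a mind that actually swaps out its dispatch code when it comes to believe an alternative is more fuzzle.

```python
# Hypothetical illustration of the self-modification dynamic behind (D).

def inert_dispatch(beliefs, act):
    """This mind's current code: it does nothing with "X is fuzzle" beliefs."""
    pass

def fuzzle_dispatch(beliefs, act):
    """Alternative code that does send fuzzle-labeled plans to the action system."""
    for b in beliefs:
        if b.endswith(" is fuzzle"):
            act(b[: -len(" is fuzzle")])

class Mind:
    def __init__(self, dispatch):
        self.dispatch = dispatch
        self.beliefs = set()

    def consider_self_modification(self, candidate, candidate_name):
        # The pre-existing dynamic: if the mind *believes* the candidate code
        # is more fuzzle, it actually replaces its own dispatch with it.
        if candidate_name + " is more fuzzle" in self.beliefs:
            self.dispatch = candidate

mind = Mind(inert_dispatch)
# Accepting argument (D) only adds a belief; by itself it changes no code.
mind.beliefs.add("fuzzle_dispatch is more fuzzle")
# Only because the swap dynamic is already implemented does the belief bite:
mind.consider_self_modification(fuzzle_dispatch, "fuzzle_dispatch")
assert mind.dispatch is fuzzle_dispatch
```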
This is why you can’t argue fuzzleness into a rock.
Part of The Metaethics Sequence
Next post: “The Bedrock of Fairness”
Previous post: “The Moral Void”
I think this just begs the question:
Ah, but the tortoise would argue that this isn’t enough. Sure, the belief pool may contain “X is fuzzle,” and this dynamic, but that doesn’t mean that X necessarily gets sent to the action system. In addition, you need another dynamic:

Dynamic 2: When the belief pool contains “X is fuzzle”, and there is a dynamic saying “When the belief pool contains ‘X is fuzzle’, send X to the action system”, then send X to the action system.
Or, to put it another way:
Dynamic 2: When the belief pool contains “X is fuzzle”, run Dynamic 1.
Of course, then one needs Dynamic 3 to tell you to run Dynamic 2, ad infinitum—and we’re back to the original problem.
I think the real point of the dialogue is that you can’t use rules of inference to derive rules of inference—even if you add them as axioms! In some sense, then, rules of inference are even more fundamental than axioms: they’re the machines that you feed the axioms into. Then one naturally starts to ask questions about how you can “program” the machines by feeding in certain kinds of axioms, and what happens if you try to feed a program’s description to itself, various paradoxes of self-reference, etc. This is where the connection to Gödel and Turing comes in—and probably why Hofstadter included this fable.
Cheers, Ari
Ari, dynamics don’t say things; they do things.
A non-universal Turing machine can’t simulate a universal Turing machine. (If it could, it would be universal after all—a contradiction.) In other words, there are computers that can self-program and those that can’t, and no amount of programming can change the latter into the former.
Cheers, Ari
Well, at least I can’t be accused of belaboring a point so obvious that no one could possibly get it wrong.
Within our “anything can influence anything” (more or less) physics, the distinction between communicating a proposition and just physically setting it in motion is not clear-cut. A programmable mind can assume the dynamics encoded in some weak signals; a rock can also assume different dynamics, but you’ll have to build a machine from it first, applying more than weak signals.

I think the moral is that you shouldn’t try to write software for which you don’t have the hardware to run on, not even if the code could run itself by emulating the hardware. A rock runs on physics; Euclid’s rules don’t. We have morality to run on our brains, and… isn’t FAI about porting it to physics?
So shouldn’t we distinguish between the symbols physics::dynamic and human_brain::dynamic? (In a way, my reading the word “dynamic” uses more computing power than running any Java applet could on current computers...)
This is why it’s always seemed so silly to me to try to axiomatize logic. Either you already “implement” logic, in which case it’s unnecessary, or you don’t, in which case you’re a rock and there’s no point in dealing with you.
I think this also has deeper implications for the philosophy of math: the desire to fully axiomatize is still deeply ingrained despite Gödel, but in some ways this seems like a more fundamental challenge. You can write down as many rules as you want for string manipulation, but the realization of those rules in actual manipulation remains ineffable on paper.
Axiomatizing logic isn’t to make us implement logic in the first place!
It’s to enable us to store and communicate logic.
I wouldn’t describe any typical human mind as implementing logic. Even those that are logical don’t seem to think that way naturally or innately. But particular human minds have had much success thinking with ‘axiomatized’ logic.
Isn’t a silicon chip technically a rock?
Also, I take it that this means you don’t believe in the whole, “if a program implements consciousness, then it must be conscious while sitting passively on the hard disk” thing. I remember this came up before in the quantum series and it seemed to me absurd, sort of for the reasons you say.
Rocks are naturally formed. It’s not physically impossible for natural processes to form silicon into a working computer, but it’s certainly not likely.
I used that as an argument against timeless physics: If you could have consciousness in a timeless universe, then this means that you could simulate a conscious being without actually running the simulation; you could just put the data on the hard drive. I’m still waiting for an answer on that one!
In order for it to be analogous, you’d have to put the contents of the memory for every step of the program as it’s running on the hard drive. The program itself isn’t sufficient.
Since there’s no way to get the memory every step without actually running the program, it doesn’t seem that paradoxical.
Also, if time was an explicit dimension, that would just mean that the results of the program are spread out on a straight line aligned along the t-axis. I don’t see why making it a curvy line makes it any different.
Huh? A “timeless universe” still contains ‘time’; it’s just not fundamental. Consciousness may be a lot of things, but it’s definitely not static in ‘time’, i.e. it’s dynamic with respect to causality.
IL, isn’t the difference the presence or absence of causality?
“And even if you have a mind that does carry out modus ponens, it is futile for it to have such beliefs as… (A) If a toddler is on the train tracks, then pulling them off is fuzzle. (B) There is a toddler on the train tracks. …unless the mind also implements: Dynamic: When the belief pool contains “X is fuzzle”, send X to the action system.”
It seems to me that much of the frustration in my life prior to a few years ago has been due to thinking that all other human minds necessarily and consistently implement modus ponens and the Dynamic: “When the belief pool contains “X is right/desired/maximizing-my-utility-function/good”, send X to action system”
These days my thoughts are largely occupied with considering what causal dynamic could cause modus ponens and the above Dynamic to be implemented in a human mind.
IL: Timeless physics retains causality. Change some of the data on the hard drive and the other data won’t change as an inferential result. There are unsolved issues in this domain, but probably not easy ones. The process of creating the data on the hard drive might be necessarily conscious, for instance, or might not. I think that this was discussed earlier when we discussed giant look-up tables.
This is so true
You can fully describe the mind/brain in terms of dynamics without reference to logic or data. But you can’t do the reverse. I maintain that the dynamics are all that matters and the rest is just folk theory tarted up with a bad analogy (computationalism).
“Fuzzle” = “Morally right.”
Only in terms of how this actually gets into a human mind, there is a dynamic first: before anyone has any idea of fuzzleness, things are already being sent to the action system. Then we say, “Oh, these things are fuzzle!”, i.e. these are the type of things that get sent to the action system. Then someone else tells us that something else is fuzzle, and right away it gets sent to the action system too.
“Fuzzle” = “Morally right.”
Hm… As described, “fuzzle” = “chosen course of action”, or, “I choose”. Things labelled “fuzzle” are sent to the action system—this is all we’re told about “fuzzle”. But anything and everything that a system decides, chooses, sets out, to do, are sent to the action system. Not just moral things.
If we want to distinguish moral things from actions in general, we need to say more.
I just want to note that back in 2008, even though I had already read this dialogue and thought I understood it, this was one of Eliezer’s posts that made me go: “Holy shit, I didn’t realize it was possible to think this clearly.”
Going down to the bottom of the post for the TL;DR, I was pleasantly surprised to find I needed to go back up again.
Minor note: When trying to prove Strong Foundationalism (on which I have since given up), I came up with the idea of founding logic not on something anybody must accept but on something that must be true in any possible universe (e.g. 1+1=2 according to traditional logic, or reductionism, if I understand Eliezer correctly). This gets around the tortoise’s problem and reestablishes logic.
Of course, this isn’t so relevant, because the tortoise can respond by suggesting the possibility that Achilles is insane, his reasoning or his memory (or both, but that’s superfluous) being so far off-track that he can’t trust them to perform proper reasoning.