Truly Part Of You
A classic paper by Drew McDermott, “Artificial Intelligence Meets Natural Stupidity,” criticized AI programs that would try to represent notions like *happiness is a state of mind* using a semantic network:
And of course there’s nothing inside the HAPPINESS node; it’s just a naked LISP token with a suggestive English name.
So, McDermott says, “A good test for the disciplined programmer is to try using gensyms in key places and see if he still admires his system. For example, if STATE-OF-MIND is renamed G1073. . .” then we would have IS-A(HAPPINESS, G1073) “which looks much more dubious.”
Or as I would slightly rephrase the idea: If you substituted randomized symbols for all the suggestive English names, you would be completely unable to figure out what G1071(G1072, G1073) meant. Was the AI program meant to represent hamburgers? Apples? Happiness? Who knows? If you delete the suggestive English names, they don’t grow back.
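A toy illustration of the gensym test (a hypothetical Python sketch, not code from McDermott’s paper): here is the entire “knowledge” such a program holds, before and after the renaming.

```python
# A toy "semantic network": the program's whole representation of the idea.
knowledge = [("IS-A", "HAPPINESS", "STATE-OF-MIND")]

# McDermott's test: replace the suggestive English names with gensyms.
renamed = [("G1071", "G1072", "G1073")]

# Nothing the program can actually do with `knowledge` is lost in `renamed`;
# all the apparent meaning lived in names chosen for the human reader.
```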
Suppose a physicist tells you that “Light is waves,” and you believe the physicist. You now have a little network in your head that says:
IS-A(LIGHT, WAVES)
As McDermott says, “The whole problem is getting the hearer to notice what it has been told. Not ‘understand,’ but ‘notice.’ ” Suppose that instead the physicist told you, “Light is made of little curvy things.”1 Would you notice any difference of anticipated experience?
How can you realize that you shouldn’t trust your seeming knowledge that “light is waves”? One test you could apply is asking, “Could I regenerate his knowledge if it were somehow deleted from my mind?”
This is similar in spirit to scrambling the names of suggestively named lisp tokens in your AI program, and seeing if someone else can figure out what they allegedly “refer” to. It’s also similar in spirit to observing that an Artificial Arithmetician programmed to record and play back
Plus-Of(Seven, Six) = Thirteen
can’t regenerate the knowledge if you delete it from memory, until another human re-enters it in the database. Just as if you forgot that “light is waves,” you couldn’t get back the knowledge except the same way you got the knowledge to begin with—by asking a physicist. You couldn’t generate the knowledge for yourself, the way that physicists originally generated it.
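A minimal sketch of that contrast (hypothetical code, purely illustrative): a record-and-playback table cannot recover a deleted fact, while a procedure that embodies the rule regenerates it on demand.

```python
# The record-and-playback Arithmetician: arithmetic as a bare lookup table.
facts = {(7, 6): 13}

def playback(a, b):
    return facts.get((a, b))   # if the fact was never entered, there is no answer

# The same "knowledge" held as its source rather than as a record:
def regenerate(a, b):
    return a + b               # any deleted entry can be recomputed at will

del facts[(7, 6)]
print(playback(7, 6))          # None -- the deleted fact does not grow back
print(regenerate(7, 6))        # 13   -- regenerated from the procedure itself
```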
The same experiences that lead us to formulate a belief, connect that belief to other knowledge and sensory input and motor output. If you see a beaver chewing a log, then you know what this thing-that-chews-through-logs looks like, and you will be able to recognize it on future occasions whether it is called a “beaver” or not. But if you acquire your beliefs about beavers by someone else telling you facts about “beavers,” you may not be able to recognize a beaver when you see one.
This is the terrible danger of trying to tell an artificial intelligence facts that it could not learn for itself. It is also the terrible danger of trying to tell someone about physics that they cannot verify for themselves. For what physicists mean by “wave” is not “little squiggly thing” but a purely mathematical concept.
As Donald Davidson observes, if you believe that “beavers” live in deserts, are pure white in color, and weigh 300 pounds when adult, then you do not have any beliefs about beavers, true or false. Your belief about “beavers” is not right enough to be wrong.2 If you don’t have enough experience to regenerate beliefs when they are deleted, then do you have enough experience to connect that belief to anything at all? Wittgenstein: “A wheel that can be turned though nothing else moves with it, is not part of the mechanism.”
Almost as soon as I started reading about AI—even before I read McDermott—I realized it would be a really good idea to always ask myself: “How would I regenerate this knowledge if it were deleted from my mind?”
The deeper the deletion, the stricter the test. If all proofs of the Pythagorean Theorem were deleted from my mind, could I re-prove it? I think so. If all knowledge of the Pythagorean Theorem were deleted from my mind, would I notice the Pythagorean Theorem to re-prove? That’s harder to boast, without putting it to the test; but if you handed me a right triangle with sides of length 3 and 4, and told me that the length of the hypotenuse was calculable, I think I would be able to calculate it, if I still knew all the rest of my math.
What about the notion of mathematical proof? If no one had ever told it to me, would I be able to reinvent that on the basis of other beliefs I possess? There was a time when humanity did not have such a concept. Someone must have invented it. What was it that they noticed? Would I notice if I saw something equally novel and equally important? Would I be able to think that far outside the box?
How much of your knowledge could you regenerate? From how deep a deletion? It’s not just a test to cast out insufficiently connected beliefs. It’s a way of absorbing a fountain of knowledge, not just one fact.
A shepherd builds a counting system that works by throwing a pebble into a bucket whenever a sheep leaves the fold, and taking a pebble out whenever a sheep returns. If you, the apprentice, do not understand this system—if it is magic that works for no apparent reason—then you will not know what to do if you accidentally drop an extra pebble into the bucket. That which you cannot make yourself, you cannot remake when the situation calls for it. You cannot go back to the source, tweak one of the parameter settings, and regenerate the output, without the source. If “two plus four equals six” is a brute fact unto you, and then one of the elements changes to “five,” how are you to know that “two plus five equals seven” when you were simply told that “two plus four equals six”?
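A sketch of the pebble counter with its invariant written down (a hypothetical rendering of the parable): whoever holds the invariant can repair the system; whoever only memorized the ritual cannot.

```python
# The shepherd's counter. Invariant: pebbles in the bucket == sheep outside the fold.
bucket = 0

def sheep_leaves():
    global bucket
    bucket += 1   # toss a pebble in

def sheep_returns():
    global bucket
    bucket -= 1   # take a pebble out

# If you understand the invariant, an accidentally dropped extra pebble has an
# obvious repair: remove one pebble, or recount the flock and reset the bucket.
# If the procedure is a brute fact to you, no repair suggests itself.
```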
If you see a small plant that drops a seed whenever a bird passes it, it will not occur to you that you can use this plant to partially automate the sheep-counter. Though you learned something that the original maker would use to improve on their invention, you can’t go back to the source and re-create it.
When you contain the source of a thought, that thought can change along with you as you acquire new knowledge and new skills. When you contain the source of a thought, it becomes truly a part of you and grows along with you.
Strive to make yourself the source of every thought worth thinking. If the thought originally came from outside, make sure it comes from inside as well. Continually ask yourself: “How would I regenerate the thought if it were deleted?” When you have an answer, imagine that knowledge being deleted as well. And when you find a fountain, see what else it can pour.
1 Not true, by the way.
2 Richard Rorty, “Out of the Matrix: How the Late Philosopher Donald Davidson Showed That Reality Can’t Be an Illusion,” The Boston Globe, 2003, http://archive.boston.com/news/globe/ideas/articles/2003/10/05/out_of_the_matrix/.
I make it a habit to learn as little as possible by rote, and just derive what I need when I need it. This means my knowledge is already heavily compressed, so if you start plucking out pieces of it at random, it becomes unrecoverable fairly quickly. As near as I can tell, my knowledge rarely vanishes for no good reason, though, so I have not really found this to be a handicap.
...age will eventually remedy that. ;)
I don’t think you’ve understood the article. The idea of the article is that if you’re able to derive it, then yes, you can regenerate it. That’s what ‘regenerate’ means.
I think nominull does understand it, and at one level higher than you do. He understands the principle so well that he goes and makes a tradeoff in terms of memory used vs. execution time.
Take a symmetric matrix with a conveniently zeroed-out diagonal. You could go and memorize every element of the matrix (no understanding, pure rote memorization). You could go and memorize every element AND notice that it happens to be symmetric (understanding, which is what you seem to be thinking of). Or you could notice that it happens to be symmetric and then only memorize half the entries in the first place (nominull’s approach).
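A small sketch of that tradeoff under the stated assumptions (symmetric matrix, zero diagonal); the example values are made up:

```python
# Memorize only the entries above the diagonal; regenerate everything else on demand.
upper = {(0, 1): 5, (0, 2): 2, (1, 2): 7}   # a made-up 3x3 example

def entry(i, j):
    if i == j:
        return 0                             # the diagonal is known to be zero
    return upper[(min(i, j), max(i, j))]     # symmetry supplies the other half

print(entry(2, 0))   # 2 -- recovered from the stored (0, 2) entry
```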
I go with nominull’s approach myself...I’m just a lot sloppier about selecting what info to rote memorize.
My interpretation: if your brain can regenerate lost information from its neighbors, but you don’t actually need that, then you have an inefficient information packing system. You can improve the situation by compressing more until you can’t regenerate lost information.
However, I have some doubts about this. Deep knowledge seems to be about the connections between ideas, and I don’t think you can significantly decrease information regeneration without removing the interconnections.
My experience is that some people have an easier time memorizing by rote than others. Not all brains are wired the same. Personally I learn relations and concepts much easier and quicker than facts. But that may not be the case for everybody. It might not even be advantageous for everybody—at least not in the ancestral environment which had much less easily detectable structure than our well-structured world.
Reminds me of the time that my daughter asked me how to solve a polynomial equation. Many moons removed from basic algebra, I had to start from scratch and quickly ended up deriving the quadratic formula without realizing where I was going until the end. It was a satisfying experience, although there’s no way to tell how much the work was guided by faint memories.
Having recently reverse-engineered the quadratic formula myself, I can say it involves quite a few steps that would be pretty tricky to capture without a lot of time and patience, a very good intuition for algebra, or a decent guiding hand from past memories. Given how much of the structure I can recall from memory, the latter seems most likely, but it’s provably doable without knowing it in the first place, so I won’t dismiss that possibility :)
It’s not too hard if you remember that you can get it from completing the square. Or of course you could use calculus.
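For reference, the completing-the-square route (standard algebra, spelled out in the same plain style as the other worked examples in this thread):

a x^2 + b x + c = 0
x^2 + (b/a) x = -c/a
x^2 + (b/a) x + (b/(2a))^2 = (b/(2a))^2 - c/a
(x + b/(2a))^2 = (b^2 - 4ac) / (4a^2)
x + b/(2a) = ±sqrt(b^2 - 4ac) / (2a)
x = (-b ± sqrt(b^2 - 4ac)) / (2a)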
A valuable method of learning math is to start at the beginning of recorded history and read the math-related texts that were produced by the people who made important contributions to the progression of mathematical understanding.
By the time you get to Newton, you understand the basic concepts of everything and where it all comes from much better than if you had just seen them in a textbook or heard a lecture.
Of course, speaking from experience, reading page after page of Euclid’s proofs can be exhausting; it is hard to keep paying enough mental attention to actually understand each one before moving on to the next. :)
Still, it does help tremendously to be able to place the knowledge in the mental context of people who actually needed and made the advances.
I believe that this is how St. John’s College teaches math (and everything else). They only use primary texts. If anyone is interested in this approach, give them a look.
Sorry. I didn’t see the comment immediately below this one.
@Sharper: There’s actually a school that teaches math (and other things) that way, St John’s College in the US (http://en.wikipedia.org/wiki/St._John’s_College%2C_U.S). Fascinating place.
> I make it a habit to learn as little as possible by rote, and just derive what I need when I need it. This means my knowledge is already heavily compressed, so if you start plucking out pieces of it at random, it becomes unrecoverable fairly quickly.
This is why I find learning a foreign language to be extremely difficult. There’s no way to derive the word for “desk” in another language from anything other than the word itself. There’s no algorithm for an English-Spanish dictionary that’s significantly simpler than a huge lookup table. (There’s a reason it takes babies years to learn to talk!)
I had a similar complaint, and the need to memorise a great quantity of seemingly arbitrary facts put me off learning languages and, to a lesser extent, history. Interestingly, it seems easier to learn words from context and use for that reason: you can regenerate the knowledge from a memory of how and when a word is used. I am also told that once you know multiple languages it becomes possible to infer from the relations between them, which is perhaps why Latin is still considered useful.
I find that it helps to think of learning a foreign language as conducting a massive chosen-plaintext attack on encrypted communications, in which you can use differential analysis and observed regularities to make educated guesses about unknown ciphertexts.
My ability to learn history improved greatly when I stopped perceiving it as “A random collection of facts I have to memorize” and started noticing the regularities that link things together. Knowing that World War II was fought amongst major world powers around 1942 lets you infer that it was fought using automobiles and aeroplanes, and knowing that the American Revolution was fought in the late 1700s lets you infer the opposite, even if you don’t know anything else specific about the wars.
True, you can derive new information from previously learned information. But patterns like ‘there were no cars in the American Revolution’ aren’t going to score you anything or get radically new information. And there’s no way to derive a lot of the information.
> I make it a habit to learn as little as possible by rote, and just derive what I need when I need it.
Do realize that you’re trading efficiency (as in speed of access in normal use) for that space saving in your brain. Memorizing stuff allows you to move on and save your mental deducing cycles for really new stuff.
Back when I was memorizing the multiplication tables, I noticed that
9 x N = 10 x (N-1) + (9 - (N-1))
That is, 9 x 8 = 70 + 2
So, I never memorized the 9’s the same way I did all the other single digit multiplications. To this day I’m slightly slower doing math with the digit 9. The space/effort saving was worth it when I was 8 years old, but definitely not today.
There were actually a few times (in my elementary school education) when I didn’t understand why certain techniques that the teacher taught were supposed to be helpful (for reasons which I only recently figured out). The problem of subtracting 8 from 35 would be simplified as such:
35 − 8 = 20 + (15 − 8)
I never quite got why this made the problem “easier” to solve, until, looking back recently, I realized that I was supposed to have MEMORIZED “15 − 8 = 7!”
At the time, I simplified it to this, instead. 35 − 8 = 30 + (5 − 8) = 20 + 10 + (-3) = 27, or, after some improvement, 35 − 8 = 30 - (8 − 5) = 30 − 3 = 20 + 10 − 3 = 27.
Evidently, I was happier using negative numbers than I was memorizing the part of the subtraction table where I need to subtract one digit numbers from two digit numbers.
I hated memorization.
Wow. I’ve never even conceived of this, on its own or as a simplification.
My entire life has been the latter simplification method.
I have a similar way, which I find simpler:
9 x N = 10 x N - N
That is, 9 x 8 = 10 x 8 - 8
I always do my 9x multiplications like this! We were taught this, though. I can’t say I figured it out on my own.
I learned my nines like that too, except I think the teacher showed us that trick. Of the things I learned personally… My tricks were more about avoiding the numbers I didn’t like than being efficient.
I could only ever remember how to add 8 to a number by adding ten and then subtracting two. I learned my 8 times tables by doubling the 4th multiple, and 7 by subtracting the base number from that. I suppose I only ever really memorized 2-6 and 12.
Knowing how to regenerate knowledge does not mean that you only store the information in its seed/compressed form. However, if you need the room for new information, you can do away with the flat storage and keep the seed form, knowing that you can regenerate it at will.
I sure wish I could choose what gets deleted from memory that easily.
In my experience it is just a matter of not using the memory/skill/knowledge. I was not trying to imply it was a quick process.
So, what about the notion of mathematical proof? Anyone want to give a shot at explaining how that can be regenerated?
I doubt this is feasible to regenerate from scratch, because I don’t think anyone ever generated it from scratch. Euclid’s Elements were probably the first rigorous proofs, but Euclid built on earlier, less-rigorous ideas, which we would now recognize as invalid as proofs but better than a broad heuristic argument.
And of course, Euclid’s notion of proof wasn’t as rigorous as Russell and Whitehead’s.
If you still have the corresponding axioms, it should be pretty trivial to rebuild the idea of “combine these rules together to create significantly more complex rules”, and then perhaps to relabel things in to “axioms” and “proofs”. Leave a kid with a box of Legos and ey’ll tend to build something, so the basic combination of “build by combination” seems pretty innate :)
If you’ve lost the explicit idea of axioms, but still have algebra, then you can get basic algebraic proofs, like 10X = 9X + 1X. If you play around from there, you should be able to come up with, and eventually prove, a few generalizations, and eventually you’ll have a decent set of axioms. I’d expect you’d probably take a while to develop all of them.
Dynamically_Linked: On that one, I’m having a hard time understanding what exactly is being regenerated. If it’s just a matter of “systematizing the process of deducing from assumptions”, then it doesn’t sound hard. The question is just—what knowledge do I have before that, on which I’m supposed to come up with the concept? What’s the “the sides of this triangle are 3 and 4 and this angle is right, and the hypotenuse is calculable”?
Very good post—I think it’d be helpful to have a series of examples of knowledge being regenerated. Then people could really get your idea and use it.
Life is full of contradictions. Your boss wants you to work more, you want to spend more time with your family. On the one hand you need the salary to support your family and on the other hand you need a private life to enjoy yourself, recharge and be ready again to work some more. Do you work to live, or do you live to work? Can the question even be answered with a simple ‘yes’ or ‘no’? Assuming you do not live to work—then why do you work? And the other way around: if you do not work to live, then why do you live? That is a contradiction.
But life is not a matter of yes or no questions. Or is life a matter of yes and no questions? This is a clear ‘yes or no question’ and clearly a matter concerning life. Assuming life is, then it would not be a matter of yes or no questions and the statement ‘Life is not a matter of yes or no questions’ would be false, assuming on the other hand that life is a matter of yes and no questions then the statement would be false as well. No matter how you approach it the statement is always false but you nevertheless agree with it. Another contradiction—how can this be?
The answer is of course the middle ground. You do not only work just to live, and you do not only live just to work. Being the smart person that you are, you look at your options, understand the consequences, and strike a compromise. Work some so you can live some so you can work some more… A part of your salary flows back into your next salary by allowing you to recharge, and a part of your life, supported by your salary, is what lets you recharge in order to earn more salary. It is a recursive, self-referencing feedback loop—like a Moebius snail.
How to understand this recursive self-referencing feedback loop—let us call it the Moebius effect—to know what you have to do, is what I want to help you realize.
Those “meaningless” tokens aren’t only used in one place, however. If you had a bunch of other facts including the tokens involved, like “waves produce interference patterns when they interact” and “light produces interference patterns when it interacts”, then you can regenerate “light is waves” if it is lost.
Similarly, while “happiness is a state of mind” is not enough to define happiness, a lot of other facts about it certainly would. The fact that it is a state of mind would also let us apply facts we know about states of mind, giving us even more information about happiness.
Part of the fun of the Contact Project is trying to interpret a message that has been fully gensymmed.
I’ve always been intimidated by this. I’m quite positive I couldn’t regenerate the Pythagorean Theorem, but I know that I should be able to. I certainly wouldn’t be able to figure out basic calculus on my own. I wish that I could, but I know that I wouldn’t be able to. Are there any things we’ve learned from mathematicians in the past that make figuring out such things easier? Anything I can learn to make learning easier?
Well, if you like reading things, I know of one extremely good book about the different methods and heuristics that are useful in problem-solving: George Polya’s How to Solve It. I strongly recommend it. Hell, I’ll mail it to you if you like.
However, it feels to me personally that every single drop of the problem-solving and figuring-things-out ability I have comes purely from active experience solving problems and figuring things out, and not from reading books.
Well, here’s my background:
I taught myself math from Algebra to Calculus (by “taught myself” I mean I went through the Saxon Math books and learned everything without a teacher, except for the few times when I really didn’t understand something, when I would go to a math teacher and ask).
I made sure I tried to understand every single proof I read. I found that when I understood the proofs of why things worked, I would always know how to solve the problems. However, I remember thinking, every time I came across a new proof, that I wouldn’t have been able to come up with it on my own, without someone teaching it to me. Or, at least, I may have been able to come up with one or two by accident, as a byproduct of something I was working on, but I really don’t think I’d be able to sit down and try to figure out the differentiation, for example, on purpose, if someone asked me to figure out a method to find the slope of a function.
That’s what I meant when I said that I’m intimidated by this. It’s not impossible that I’d figure out one of the theorems by accident, while working on something else; I just can’t see myself sitting down to figure out the basic theorems of mathematics. If you think it’ll help, I’ll have to pick up “How to Solve It” from a library. Thanks for the advice!
One true thing that might be applicable: usually math textbooks have ‘neat’ proofs. That is, proofs that, after being discovered (often quite some time ago), were cleaned up repeatedly, removing the earlier (intuitive) abstractions and adding abstractions that allow for simpler proofs (sometimes easier to understand, sometimes just shorter).
Rather than trying to prove a theorem straight, a good intermediary step is to try to find some particular case that makes sense. Say, instead of proving the formula for the infinite sum of geometric progressions, try the infinite sum of the progression 1, 1/2, 1/4, … Instead of proving a theorem for all integers, is it easier for powers of two?
Also, you can try the “dual problem”. Try to violate the theorem. What is holding you back?
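(Working the particular case mentioned above: let S = 1 + 1/2 + 1/4 + …; then S/2 = 1/2 + 1/4 + …, so S − S/2 = 1 and S = 2. The same subtract-a-scaled-copy move proves the general geometric-sum formula.)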
The Pythagorean Theorem is just a special case of the magnitude of a vector, aka the Euclidean norm. Though, I wouldn’t be able to derive that if that were deleted from my brain.
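(Concretely, for the 3-4-5 case from the post: the vector (3, 4) has norm sqrt(3^2 + 4^2) = sqrt(25) = 5, which is exactly the hypotenuse of the 3-4 right triangle.)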
Gold for this.
I feel really stupid after reading this, so thanks a lot for shedding light onto the vast canvas of my ignorance.
I have almost no idea which of the spinning gears in my head I could regrow on my own. I’m close to being mathematically illiterate, due to bad teaching and what appears to be a personal aversion or slight inability—so I may have come up with the bucket-plus-pebble method and perhaps with addition, subtraction, division and possibly multiplication—but other than that I’d be lost. I’d probably never conceive of the idea of a tidy decimal system, or that it may be helpful to keep track of the number zero.
Non-mathematical concepts on the other hand may be easier to regrow in some instances. Atheism for example seems easy to regrow if you merely have decent people-intuition, a certain willingness to go against the grain (or at least think against the grain), plus a deeply rooted aversion against hypocrisy. Once you notice how full of s*it people are (and notice that you yourself seem to share their tendencies) it’s a fairly small leap of (non)faith, which would explain why so many people seem to arrive at atheism all due to their own observations and reasoning.
I think I could also regrow the concept of evolution if I spent enough time around different animals to notice their similarities and if I was familiar with animal breeding—but it may realistically take at least a decade of being genuinely puzzled about their origin and relation to one another (without giving in to the temptation of employing a curiosity stopper needless to say). Also, having a rough concept of how incredibly old the earth is and that even landscapes and mountains shift their shape over time would have helped immensely.
It feels so hard to understand why it took almost 10000 years for two human brains to make a spark and come up with the concept of evolution. How did smart and curious people who tended to animals for a living, and who knew about the intricacies of artificial breeding, not see the slightly unintuitive but nonetheless simple implications of what they were doing there?
Was it seriously just the fault of the all-purpose curiosity stopper superstition, or was it some other deeply ingrained human bias? It’s unbelievable how long no one realized what life actually is all about. And then all of a sudden two people caught the right spark at the same point in history, independently of each other. So apparently biologists needed to be impacted by many vital ideas (geological time, economics) to come up with something that a really sharp and observant person could have realistically figured out 10000 years earlier.
And who knows, maybe some people thought of it much earlier and left no trace due to illiteracy, or fear of losing their social status or even their life. Come to think of it, most people in most places during most of the past would have gotten their brilliant head put on a stick if they had actually voiced the unthinkable truth and dared to deflate the ever-needy, morbidly obese ego of Homo sapiens sapiens.
Just because you aren’t aware of it, doesn’t mean it didn’t happen : )
Back when I was a teenager, I distinctly remember wondering about how one would go about calculating the distance traveled by a constantly accelerating object during a given period of time. Of course, life—as it is wont to do—quickly distracted me, and I didn’t think about the problem again until years later, when I learnt about integration and thought to myself “Oh, so that’s how you’d do it!” Now, I don’t think I would be able to regenerate integral calculus all on my own, but I know I’m at least observant enough to notice that something was missing—or at least I was when I was 15—and I think that that’s an important first step; the answers that we find are strictly limited by the questions that we ask. As a side note, my cousin was, at the age of 6, able to derive multiplication from addition all on his own. He is made of win.
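(For the record, the standard result that integration gives, assuming constant acceleration a from rest: velocity is v(t) = a·t, so the distance traveled by time T is the integral of a·t from 0 to T, which is (1/2)·a·T^2.)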
http://harvardmagazine.com/2012/03/twilight-of-the-lecture
Sometimes it’s good to learn things by rote, too, as long as you understand it later. For example, while I was reading the Intuitive guide to Bayesian Reasoning, I sometimes wished that there was something that I could memorize, instead of having to understand the concept, and then figure out how to apply it, and then understand what the answer meant.
I agree, although I sense there’s some disagreement on the meaning of “learning by rote”.
Learning by rote can be tactical move in a larger strategy. In introductory rhetoric, I wasn’t retaining much from the lectures until I sat down to memorize the lists of tropes and figures of speech. After that, every time the lectures mentioned a trope or other, even just in passing, the whole lesson stuck better.
Rote memorization prepares an array of “hooks” for lessons to attach to.
This is not too different from what I did as a teenager in school. I separated out the facts as “axioms” and “theorems”, noting that the theorems can be deduced from the axioms, should I forget them. I would try to figure out how to deduce the “redundant” theorems from my axioms, which would help me remember them. As a simple example, the Law of Conservation of Momentum is redundant and easily derived from “every force has an equal and opposite force”—simply multiply by time. Naturally, I also immediately deduced a conservation law for center of mass—multiply by time again. I also noted places where two facts are redundant, but I couldn’t decide which was the more fundamental. Mostly I did this because I know that my memory for boring disconnected facts is rather poor—it “shouldn’t” be easier for me to remember how to derive a fact than the fact itself, but often it is anyways.
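(Spelling out the “multiply by time” step, as I read it: Newton’s third law says F_12 = -F_21 at every instant; multiplying by a small time interval gives F_12·Δt = -F_21·Δt, i.e. Δp_1 = -Δp_2, so the total momentum p_1 + p_2 does not change.)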
The idea of a concept having or being a “source” seems odd to me. There are many ways of looking at the same concept or idea; oftentimes, the key to finding a new path is viewing an idea in a different way and seeing how it “pours”, as you put it. The problem as I see it is that there are often many ways of deriving any particular idea, and no discernible reason to call any particular derivation the source.

I find that my mind seems to work like a highly interconnected network, and deriving something is kind of like solving a system of equations, so that many missing pieces can be regenerated using the remaining pieces. My mind seems less like an ordered hierarchy and more like a graph in which ideas/concepts are often not individual nodes but instead highly connected subgraphs within the larger graph, such that there is the potential for vast overlap between concepts, no obvious ordering, and no obvious way to know when you truly “contain” all of some concept.

I do understand that, at least for math, ability to derive something is a good measure for some level of understanding, but even within math there are many deep theorems or concepts that I hardly believe that I truly understand until I have analyzed (even if only briefly in my head) examples in which the theorem applies and (often more importantly, imo) examples in which the theorem does not apply. Even then, a new theorem or novel way of looking at it may enhance my understanding of the concept even further. The more I learn about the math, the more connections I make between different and even seemingly disparate topics. I don’t see how to differentiate between 1) “containing” a thought and new connections “changing” it and 2) gaining new connections such that you contain more of the “source” for the thought.
Just my two cents.
I’m inclined to agree.
This comes up again in ways that I care more about in the Metaethics Sequence, where much is made of the distinction between normal (“instrumental”) values and the so-called “terminal” values that are presumed to be their source. In both cases it seems to me that a directional tree is being superimposed on what’s actually a nondirectional network, and the sense of directionality is an illusion born of limited perspective.
That said, I’m not sure it makes much difference in practical terms.
I haven’t read any of that yet, but it sounds interesting. I’m commenting on articles as I read them, going through the sequences as they are listed on the sequences page.
I think it makes a practical difference in actually understanding when you understand something. The practical advice given is to “contain” the “source” for each thought. The trouble is that I don’t see how to understand when such a thing occurs, so the practical advice doesn’t mean much to me. I don’t see how to apply the advice given, but if I could I most definitely would, because I wish to understand everything I know. In part, writing my post was an attempt to make clear to myself why I didn’t understand what was being said. I’m still kind of hoping I’m missing something important, because it would be awesome to have a better process for understanding what I understand.
I expect that in practice, the advice to “contain the source for each thought” can be generalized into the advice to understand various paths to derive that thought and understand what those paths depend on, even if we discard the idea that there’s some uniquely specifiable “source”.
Which is why I’m not sure it makes much difference.
That said, I may not be the best guy to talk about this, as I’m not especially sympathetic to this whole “Truly Part of You” line of reasoning in the first place (as I think I mentioned in a comment somewhere in this sequence of posts a few years ago, back when I was reading through the sequences and commenting on articles as I went along, so you may come across it in your readings).
Hmm, perhaps I was reading too much into it, then. I already do that part, largely because I hate memorization and can fairly easily retain facts when they are within a conceptual framework.
It’s intuitive that better understanding some concept or idea leads to better updating as well as better ability to see alternative routes involving the idea, but it seemed like there was something more being implied; it seemed like he was making a special point of some plateau or milestone for “containment” of an idea, and I didn’t understand what that meant. But, as I said, I was probably reading too much into it. Thanks, this was a pleasant discussion :)
“Could I regenerate this knowledge if it were somehow deleted from my mind?”
Epistemologically, that’s my biggest problem with religion-as-morality, along with using anything else that qualifies as “fiction” as a primary source of philosophy. One of my early heuristic tests to determine if a given religious individual is within reach of reason is to ask them how they think they’d be able to recreate their religion if they’d never received education/indoctrination in that religion (makes a nice lead-in to “do people who’ve never heard of your religion go to hell?” as well). The possibles will at least TRY to imply that gods are directly inferable from reality (though Intelligent Design is not a positive step, at least it shows they think reality is real); the lost causes give a supernatural solution (“Insert-God-Here wouldn’t allow that to happen! Or if He did, He’d just make more holy books!”).
If such a person’s justification for morality is subjective and they just don’t care that no part of it is even conceivably objective… what does that say for the relationship of any of their moral conclusions to reality?
I think that is why biology students like to dissect animals. Our relatives think it gross, but when you see with your own eyes that a body consists of organs and you trace the links between them, it feels so great...
A bit off-topic:
“A wheel that can be turned though nothing turns with it, is not part of the mechanism”—what about a gyroscope wheel that is a part of a stabilizing mechanism?
This comes to show, IMHO, two things: one should be extremely careful with one’s intuitions and examples must not be taken too far.
If you really wanted to nitpick, you could also point out non-driven wheels on cars. Such a wheel doesn’t turn anything useful itself (it is turned but does not turn anything), but it still successfully prevents one end of a vehicle from dragging on the ground, which is its actual purpose. But we’re merely amusing ourselves with literalistic counter-examples at this point, as I see you are well aware.
“What I cannot create, I do not understand.”—Richard Feynman.
This feels very important.
Suppose that something *was* deleted. What was it? What am I failing to notice?
Maybe learning to ‘regenerate’ the knowledge that I currently possess is going to help me ‘regenerate’ the knowledge that ‘was deleted’.
Once I had a dispute: I argued that in a world with the internet you don’t need to know and remember facts or principles, because you can just google them. My opponent said that with this method you don’t have the general picture and understanding in your head. Now I understand that I was wrong.
@Eliezer, some interesting points in the article, I will criticize what frustrated me:
> If you see a beaver chewing a log, then you know what this thing-that-chews-through-logs looks like,
> and you will be able to recognize it on future occasions whether it is called a “beaver” or not.
> But if you acquire your beliefs about beavers by someone else telling you facts about “beavers,”
> you may not be able to recognize a beaver when you see one.
Things do not have intrinsic meaning; rather, meaning is an emergent property of things in relation to each other: for a brain, an image of a beaver and the sound “beaver” are just meaningless patterns of electrical signals.
Through experiencing reality the brain learns to associate patterns based on similarity, co-occurrence and so on, and labels these clusters with handles in order to communicate. ‘Meaning’ is the entire cluster itself, which in turn bears meaning in relation to other clusters.
If you try to single out a node off the cluster, you soon find that it loses all meaning and reverts back to meaningless noise.
> G1071(G1072, G1073)
Maybe the above does not seem so dumb now? Experiencing reality is basically entering and updating relationships that eventually make sense as a whole in a system.
I feel there is a huge difference in our models of reality:
In my model everything is self-referential, just one big graph where nodes barely exist (only aliases for the whole graph itself). There is no ground to knowledge, nothing ultimate. The only thing we have is this self-referential map, from which we infer a non-phenomenological territory.
You seem to think the territory contains beavers; I claim beavers exist only in the map, as a block arbitrarily carved out of our phenomenological experience by our brain, as if it were the only way to carve a concept out of experience and not one of infinitely many valid ways (e.g. considering the beaver and the air around it, and not having a concept for just a beaver with no air), and as if only a part of experience could be considered without being impacted by the whole of experience (i.e. there is no living beaver without air).
This view is very influenced by emptiness by the way.