Improving Enjoyment and Retention Reading Technical Literature
A little background on myself first – I am currently studying to become involved with aging rejuvenation therapies like SENS.
This requires learning quite a lot about molecular biology, which is fine, because I find cell biology quite interesting. The problem, naturally, is that textbooks and technical literature on the subject often make very little effort to be interesting.
Many of the books I’ve been reading lately are largely lacking in energy, and I was finding my mind often drifting away from what I was reading; I generally just wasn’t enjoying the process. That was bad, because I need to spend a lot of time doing it.
I asked myself: do I hate learning about molecular biology and engineering? Should I shift my goals to something I’m more interested in? But I didn’t actually seem to be uninterested in the subject. I loved talking about what I’d learned, and I frequently thought about it with interest. I was passionate about the goal of defeating aging. So the problem was probably the books themselves.
So then the question was: how do I make boringly written biology books fun to read? Find better books? Unfortunately, based on my research, the only biology books written to be interesting tend to focus on other branches of the science; most good molecular biology books are boring. If anyone knows of any books on the subject that are unusually well written, please let me know. But I couldn’t find any.
So I looked for the bright spots: where reading was fun. What makes reading a novel fun? I asked. Interesting story, character interactions, suspense, humor, dramatic scenes.
None of these are incorporated in any molecular biology books or publications that I can find. But the answer was still there: visualize what I read. Not just visualize like the little diagrams of cellular interactions books usually give you, but stupid, over-the-top, Hollywood-status visualization. I had to make it dramatic. I had to mentally reconstruct the biology of a cell in massive, fast, and explosive terms.
Suddenly, I was reading about genetic engineering with a grin on my face, because I was visualizing a cackling mad scientist taking a jackhammer to a gene sequence.
Which, yes, is totally not what is happening in any way, but the unusual things are what stick in human memory; just reading a passage normally makes it easy to forget what I’ve read. The weirdness even seems to make the parts around it more memorable, so I find I’m remembering what I read a lot better.
Most of the time I try not to make it that absurd. But if I imagine spliceosomes blasting introns out of RNA molecules or cell lysis as an overstated explosion of a cell I simply remember the concepts better. It isn’t the most accurate view of reality, but I’m aware of that when I think back on it, and it’s better than not remembering it.
But this strategy eventually gets a little tiring to maintain alone, I find, so I had to add a second technique. Every time my mind starts to wander, I stop, close my eyes, and refocus on what I’m reading. I recite ‘Tsuyoku Naritai’ and remind myself why I want to become stronger and what I have to protect. Then I continue. I find this little technique makes a massive difference: it reorients me so that I keep concentrating, and it briefly reminds me of what I’m pursuing and why. And if that doesn’t give you the motivation to continue, you should probably find a different project.
A third useful strategy has been planning how long I will read instead of how much, and then breaking the reading up over the course of a day. First, it encourages reading to understand fully rather than reading to finish fifty pages. I also find it tends to get me to read more pages, despite removing the motivation to go fast; time goals just take some of the pressure of failing to complete work off. As an example, I read about 160 pages of a molecular biology textbook today using an input-based time goal. I used to plan for fifty pages of similar material on a regular day and sometimes not finish even that. To be fair, I’m spending more time reading now, but I think using input-based goals instead of output goals had a part in that.
The other results I’ve gotten from these strategies have been pretty good as well. I’ve been trying to quantify my happiness lately, on a scale where every full number corresponds to doubled enjoyment, and now that I’m doing these three things my average happiness while reading technical passages has gone up by nearly a full point. My enjoyment of technical literature has gone from somewhere around ‘yeah, it’s ok, I guess’ to ‘happy’ while reading. And because it’s just more fun to do, it helps me to spend more time reading about molecular biology, more time working towards an unaging future.
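To spell out the scale above: a scale where each full number corresponds to doubled enjoyment is just a base-2 logarithmic scale, so a rise of nearly a full point means enjoyment nearly doubled. A quick sketch of the arithmetic (the numbers here are illustrative, not my actual measurements):

```python
def enjoyment_ratio(before: float, after: float) -> float:
    """On a log-2 happiness scale, a difference of d points means
    enjoyment multiplied by 2 ** d."""
    return 2 ** (after - before)

# A full point is exactly a doubling:
print(enjoyment_ratio(2.0, 3.0))            # 2.0
# A rise of 0.9 points (e.g. 3.1 -> 4.0) is close to a doubling:
print(round(enjoyment_ratio(3.1, 4.0), 2))  # 1.87
```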
Anyway, I thought I’d post the ideas in case they helped anyone else out (although the first might not work as well for things that are harder to visualize). I’m also interested if anyone does anything similar (or different) to increase their enjoyment of similar texts.
I’m having the same problem with molecular biology right now, and I agree with the track you’re taking. The issue seems to be the large amount of structure totally devoid of any semantic cues. For example, a typical textbook paragraph might read:
JS-154 is one of five metabolic products of netamine; however, the enzyme that produces it is unknown. It is manufactured in cells in the far rostral region of the cerebrum, but after binding with a leukocynoid it takes a role in maintaining the blood-brain barrier—in particular guiding the movements of lipid molecules.
I find I can read paragraphs like this five or six times, write them on flashcards, enter them into Anki, and my brain still refuses to understand or remember them after weeks of trying.
On the other hand, my brain easily remembers vastly more complicated structures when they’re loaded with human-accessible meaning. For example, just by casually reading the Game of Thrones series, I know an extremely intricate web of genealogies, alliances, locations, journeys, battlesites, et cetera. Byte for byte, an average Game of Thrones reader/viewer probably has as much Game of Thrones information as a neuroscience Ph.D. has molecular biology information, but getting the neuroscience info is still a thousand times harder.
Which is interesting, because it seems like it should be possible to exploit isomorphisms between the two areas. For example, the hideous, unmemorizable paragraph above is structurally identical to (very minor spoilers):
Jon Snow is one of five children of Ned Stark; however, his mother is unknown. He was born in a castle in the far northern regions of Westeros, but after binding with a white wolf companion he took a role in maintaining the Wall—in particular serving as mentor to his obese friend Samwell.
This makes me wonder if it would be possible to produce a story as enjoyable as Game of Thrones which was actually isomorphic to the most important pathways in molecular biology. So that you could pick up a moderately engaging fantasy book—it wouldn’t have to be perfect—read through it in a day or two, and then it ends with “By the way, guess what, you now know everything ever discovered about carbohydrate metabolism”. And then there’s a little glossary in the back with translations about as complicated as “Jon Snow = JS-154” or “the Wall = the blood-brain barrier”. I don’t think this could replace a traditional textbook, but it could sure as heck supplement it.
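The glossary idea is mechanical enough to sketch: given a story-to-technical mapping, “translating” a passage back is plain term substitution. A rough illustration in Python, using only the hypothetical terms from this comment (none of them are real biology):

```python
# Hypothetical glossary from the comment above: story term -> technical term.
GLOSSARY = {
    "Jon Snow": "JS-154",
    "Ned Stark": "netamine",
    "the Wall": "the blood-brain barrier",
}

def translate(passage: str, glossary: dict) -> str:
    """Rewrite a story passage into its technical reading by
    substituting each glossary term."""
    for story_term, technical_term in glossary.items():
        passage = passage.replace(story_term, technical_term)
    return passage

print(translate("Jon Snow took a role in maintaining the Wall.", GLOSSARY))
# -> JS-154 took a role in maintaining the blood-brain barrier.
```

Naive string replacement is enough for a back-of-the-book glossary; a real attempt would need to handle pronouns and paraphrase, which is exactly where maintaining the isomorphism gets hard.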
This would be very hard to do correctly, but I’d love to see someone try, so much so that it’s on my list of things to attempt myself if I ever get an unexpectedly large amount of free time.
Seems like an extension of the memory palace idea, probably never attempted due to the complexity of maintaining the isomorphism at that scale.
One interesting thing to try is to find a professor of molecular biology who really knows the stuff and see what kind of mental structures they have built up to maintain it. “What does JS-154 remind you of?” Etc.
Don’t tempt your fans into getting you some jail time :)
There is an animated series for children aimed at explaining the human body which personifies bacteria, viruses, etc. Anyone interested in pursuing your idea may want to pick up techniques from the show:
Wikipedia article: http://en.wikipedia.org/wiki/Once_Upon_a_Time..._Life
Example: http://www.youtube.com/watch?v=LIyvrcHnriE&t=1m11s
Yvain posted a follow-up post, “Extreme Mnemonics”, on his own blog. Readers have posted many comments.
Awesome. I’m going to try this on something (short).
Random thoughts:
- if you are describing a static system, how do you represent character arcs? Can a leukocynoid become king?
- there’ll be hundreds and hundreds of characters. But I suppose that’s still better than hundreds and hundreds of random meaningless pieces of jargon.
- this is very like other kinds of constrained writing: http://en.wikipedia.org/wiki/Constrained_writing . That some of those things are even possible makes me think this is more likely than you might imagine at first glance.
I like the idea but fear that almost any subject worth learning would, when put in this format, have two major flaws:
1) It could deeply mislead people and corrupt their understanding of the actual material in difficult-to-detect (and therefore difficult-to-correct) ways because of unintended cultural associations. For example, in your fantasy novel idea, whatever associations someone attaches to the relations between kings and knights might be deeply ingrained from childhood and quite different from what the author of the new fantasy novel intended, so the reader could add in all sorts of incorrect ideas of their own.
2) Even though the intention is to learn the same amount of material (in this case molecular biology) by attaching it to a story, I think there is a good chance you’re actually just doubling the load. Not only do you have to memorize the story, but then also the connections. Imagine learning this way and revisiting the subject in two years: do you study the story or the material? Both? You’ve doubled the amount of information you need to hold the subject in your head.
Medieval alchemists might have had something like this:
You’ve come to an example of perhaps one of the main aspects of the future of communication. Metaphors are useful for a lot of reasons, but one of them is to decrease the burden on cognitive resources, either by drawing an isomorphism with something already understood (cutting out the time and energy needed to explain the structure from scratch), or by introducing an isomorphism with something that’s easier to remember because of how the human mind works (as in your example, where things involving human meaning and intention are easier to remember).
People have been using metaphors for these and other reasons for a very long time, but in most cases people have been stuck with picking out of the already-existing landscape of things to draw isomorphisms from. This is difficult because these other things were not introduced into the thought landscape among the population for the purpose of being made into metaphors—they just happen to be there. But what you’re talking about is removing this limitation, and essentially creating a new structure, for the purpose of then drawing a metaphor with it. This means creating a metaphor by design, rather than having to pick one from the wild.
Take it a step further and move outside the paradigm of being constrained to just words for explaining things (make it a bit more technological), and you realize one of the most promising avenues in the future will be to create enjoyable video games with worlds and rules of their own, for the purpose of later drawing metaphors from that man-made landscape. The problem of having to draw only from already-existing things for metaphors is gone: you can design worlds for the purpose of making metaphors out of pieces of them later on.
Few things I’ve stumbled onto seem to have the same potential as this. I wouldn’t be surprised if this is literally at the center of the future of communication. Once video games are easier to make, and the average ambitious individual can work on their own video game just as any ambitious person can write a novel, this may end up as one of the most effective forms of communication: building an enjoyable virtual world for people to play through, with the ending being a re-labeling of all the components, revealing the creator’s theory of something in an extremely clear, effective, efficient manner.
How long have you been doing this for?
About a month now
I’ve enjoyed the Mindhacks tip to write in books—If you can see how to write it better, summarize it better, index it better, or organize it better, doing so is an active use of the information.
One of the main reasons that I still print the electronic copies of scholarly papers I read for my yeast metabolism projects.
I always find it easier to motivate myself to research something if there’s an immediate application. Sometimes this is an actual immediate application, such as reading an article on nutrition and thereby deciding it would be a good idea to buy a certain kind of food next time I’m at the store. But sometimes what I want to research doesn’t have such a quick, visceral effect on my action, in which case manufacturing an immediate application is what I often find necessary to motivate myself.
To be specific, what I usually do is try to write an article on the subject. This seems to signal to my brain that there really is an immediate application for the research (using the information to write the article). Of course this is more of a hack than anything else: not everything has a practical application, but everything can be made into an article, which throws the meaningful distinction of “does this have a practical application?” to the wayside, even though my brain feels like the answer is “yes, this action I’m taking right now (writing the article) is the practical application!”
So yeah, I don’t know whether you’re sufficiently similar to me in this respect for this suggestion to be useful, but this does seem to be a common brain setting (feel motivation if there’s a clear practical application for acquiring the knowledge, don’t feel it if there isn’t). What you’re working on (reading highly detailed descriptions of mechanisms in molecular biology for the sake of something far off in the future) may be just the sort of thing one’s brain finds boring because the practical application isn’t visceral enough in near mode, and thus may benefit from the hack I’ve suggested: writing articles on the topic to give your brain something sufficiently concrete to grasp onto for motivation.
This is similar to another frequently-recommended technique: Teach the material to someone else.
This definitely sounds like something that would help me feel more active in my research; I’ll have to try it, thanks!
Thanks for posting this and I wish you the best in your seemingly valuable quest.
One key point: are you using spaced repetition? The spacing and testing effects are valuable memory tools, indeed, and you would be remiss if you did not at least try it. I have some biology Anki cards here to get you started in case you are interested: http://andrewtmckenzie.com/sr/#biology. Also feel free to PM me.
The deck violates the laws of spaced repetition that Piotr Wozniak proposed.
There’s no reason why the word “optionally” should appear in this way on an Anki card. The answer to a card always has to be the same answer.
There should be one card for the full name and a different card for the one letter code.
Also, there’s no need to write “name this amino acid”. “Full name?” works as well and is shorter. Short cards mean you spend less time reading them and can process them faster.
I have other cards for which I associate the one letter code with the name, so they mean the same thing to me.
Though I disagree with your example, I do violate the “laws” of SR in lots of other ways, for example including multiple questions on a card. There is a trade-off between the time to make a card and the relative benefit of having more spread out cards. I also think that “chunking” information together in memory sequences can be useful.
Completely agree.
Can you link to some of your cards so we can get a sense of how you’ve made them?
In that case I still don’t see why you don’t settle on either the one-letter code or the full name.
Wozniak’s idea is that if you use different ways to get to the answer, you won’t strengthen your memory in the same way as when you always answer a card the same way.
That means the algorithm won’t correctly calculate when you need to repeat the card, because it doesn’t know which answer you gave. That in turn leads to forgotten cards, which is bad because you have to start over with them.
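For context, the scheduling at issue works roughly like SuperMemo’s SM-2 algorithm, from which Anki’s scheduler derives. A simplified sketch (real Anki adds learning steps, fuzz, and per-deck options; this is not Anki’s actual code) showing why a failed review is costly: it resets the interval, so a card you answer inconsistently keeps getting restarted.

```python
def sm2_next(interval_days: float, ease: float, quality: int):
    """Simplified SM-2 step. quality is a 0-5 self-rating of the answer.
    Returns (next interval in days, updated ease factor)."""
    if quality < 3:
        # Lapse: the card is treated as forgotten and starts over.
        return 1.0, ease
    # Successful review: adjust the ease factor (floored at 1.3, per SM-2)
    # and grow the interval multiplicatively.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return interval_days * ease, ease

interval, ease = 1.0, 2.5
for q in (5, 5, 5):           # three perfect reviews: intervals grow
    interval, ease = sm2_next(interval, ease, q)
print(round(interval, 1))     # 19.7 (days)
interval, ease = sm2_next(interval, ease, 2)  # one lapse resets the card
print(interval)               # 1.0
```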
I haven’t taken care to keep out copyrighted pictures or to sort my cards decently. There’s also a mix of languages. So at the moment I have nothing that’s easily publishable as a collection. Maybe I will in the future.
I’ve got Anki downloaded, but I haven’t used it yet—I’ll definitely give it a shot now. Not having to make cards before I can start studying makes getting myself to try a lot easier, thanks.
Actually, probably not. Making your own cards is valuable when reading a text because it forces you to go into detail and investigate which information is in the text.
Anki exists to prevent forgetting information that you learned. If you try to learn cards that contain information which you don’t understand you will waste time.
Jonathan Hayward’s An Abstract Art of Memory describes a similar process of coming up with memorable images for material where you actually need to absorb new ideas, not just arbitrary information like phone numbers which traditional mnemonics are useful for.
Spaced Repetition is only the beginning. http://bigthink.com/neurobonkers/assessing-the-evidence-for-the-one-thing-you-never-get-taught-in-school-how-to-learn
Here’s How to Read—on page 2 there’s a table that lists all of what this guy recommends, use that to evaluate if the rest of the document is worth your time. http://pne.people.si.umich.edu/PDF/howtoread.pdf
Also, if you know anyone who has gone to CFAR, start PMing them for the material on Propagating Urges. http://rationality.org/schedule/
What you’re doing to make reading papers fun is apparently something that Andrew Critch is very very good at, so keep it up, Critch Jr.
I read a lot of technical material for my job (physicist) and find that trying to make it as ‘active’ as possible is key to making it enjoyable. I made a list of actions that make it active, things like ‘visualize the material’ or ‘draw a concept map’ or ‘make a prediction about the material, keep reading, and correct the prediction’ or ‘what do you already know about this material’ etc etc., and find that going down the list in a random fashion has helped make it more interesting and engaging.
How to enjoy talking about it, retain more and enjoy more: read aloud.
I don’t do this systematically, but sometimes there’s something in the material that spontaneously causes me to come up with such a mental image.
An example from some time back, when I was studying for what I think was either a Data Communications or Operating Systems course:
Correction: “tsuyoku”, not “tsukoyu”.
Whoops, I didn’t notice the typo because I expected the misspelled version.
You just became stronger.
I have found that, at least for myself, once I spend a lot of time studying something in depth (at least 3 months), it automatically becomes interesting. The key is just to push myself past that initial ‘inertial’ barrier in my mind.
I used a similar thing when cramming for selected topics in plant physiology. For photosynthesis, for example, we had to draw the cycles, etc. (and I realize it is way simpler, but it was only tangentially related to our specialty). I still remember (10 years later) how...
...there was a desert across which they had to post Electrons (small hard balls). At the left border of the desert and in the small oasis in the middle of it, two outposts of very dedicated dystrophics stood. They could not move far out into the wilderness, but they had catapults which sent Electrons to the right, so that they eventually reached the right border of the desert. Aliens in blindingly golden ships threw them food packages labeled ’620′, ‘340’, etc. After they ate, the dystrophics could load The Electron into the catapult. The desert was a bit sloped, which helped.
...then my study mates told me that was insane… and Travels of The Electron were discontinued.
This post reminds me of ADHD. Here is a quote from a 2009 Washington Post article:
ADHD has long been assumed to have something to do with low dopamine. So perhaps something to raise dopamine levels would be helpful. Some people claim that taking L-tyrosine, a dopamine precursor, can raise dopamine levels and help people pay attention to things that would otherwise not hold their attention.
I have a basic question—why do you want to stuff all that technical minutiae into your memory?
We now live in an age when a great amount of information is available at our fingertips. If I don’t know something specific, but know the context and how to ask for what I want, in a few seconds Google will tell me. Lookup is fast and cheap; knowing how and where to find the information is almost as good as knowing the information itself.
Given this, I think the focus should be on understanding how the specific systems that you’re interested in work in general, not on the particular details. Details you can always look up when you need them as long as you understand the broad outlines and the context. If you know, say, that insulin causes the uptake of glucose from the bloodstream into the cells, you don’t need to keep in your memory the fact that this happens through GLUT-4 transporters—when you need to drill down into particulars, such information is easy to find.
In a way that’s a variation of old advice: focus on understanding, not on memorization.
Essentially, this is an issue of managing complexity. That’s a huge topic and I don’t want to delve into it here, but brute force (as in, whacking the bits with a spaced repetition hammer until they fit into your brain) is rarely a good approach.
To prevent context switching. Context switching in humans is bad.
(This is also why we teach people how to add, subtract, and multiply small numbers together.)
Context switching is, of course, bad. But the size of human working memory is quite limited, so in fields where you have to operate on lots of data, some context switching is inevitable.
Also consider the cost, from two perspectives.
First, it used to be that if some piece of needed data wasn’t in your memory, you had to (in the best case) get up, walk to the bookshelf, take the book, page through it to find the data, and then resume. In the worst case you had to go to the library or even send for the book or the paper you needed. That’s a pretty major context switch. Nowadays, you bring up another window on the same screen you’re sitting before, type a few words, and get your answer in a couple of seconds. That’s a considerably less disruptive context switch.
Second, memory is adaptive and works like a cache. If you find yourself constantly looking up the same things, they will stick in your memory and you won’t have to look them up any more. The context-switching issues will become less prevalent as you go along. When you try to pre-memorize everything you might need, you spend a great deal of resources (time, attention, will, etc.) to populate your memory-cache beforehand. Is this a good idea? I think it depends—sometimes yes, sometimes no. You will forget what you’re not using anyway.
One reason is overlearning.
Overlearning is about practicing skills, not memorizing facts.
Memorizing facts (specifically, using domain knowledge to compress the facts, making them easier to memorize) is a way to practice skills.
Um, no. We may be using words differently, but for me memorizing facts and practicing skills are not the same thing at all.