Some rationality tweets
Will Newsome has suggested that I repost my tweets to LessWrong. With some trepidation, and after going through my tweets and categorizing them, I picked the ones that seemed the most rationality-oriented. I held some in reserve to keep the post short; those could be posted later in a separate post or in the comments here. I’d be happy to expand on anything here that needs clarification.
Epistemology
Test your hypothesis on simple cases.
Forming your own opinion is no more necessary than building your own furniture.
The map is not the territory.
Thoughts about useless things are not necessarily useless thoughts.
One of the successes of the Enlightenment is the distinction between beliefs and preferences.
One of the failures of the Enlightenment is the failure to distinguish whether this distinction is a belief or a preference.
Not all entities comply with attempts to reason formally about them. For instance, a human who feels insulted may bite you.
Group Epistemology
The best people enter fields that accurately measure their quality. Fields that measure quality poorly attract low quality.
It is not unvirtuous to say that a set is nonempty without having any members of the set in mind.
If one person makes multiple claims, this introduces a positive correlation between the claims.
We seek a model of reality that is accurate even at the expense of flattery.
It is no kindness to call someone a rationalist when they are not.
Aumann-inspired agreement practices may be cargo cult Bayesianism.
Godwin’s Law is not really one of the rules of inference.
Science before the mid-20th century was too small to look like a target.
If scholars fail to notice the common sources of their inductive biases, bias will accumulate when they talk to each other.
Some fields, e.g. behaviorism, address this problem by identifying sources of inductive bias and forbidding their use.
Some fields avoid the accumulation of bias by uncritically accepting the biases of the founder. Adherents reason from there.
If thinking about interesting things is addictive, then there’s a pressure to ignore the existence of interesting things.
Growth in a scientific field brings with it insularity, because internal progress measures scale faster than external measures.
Learning
It’s really worthwhile to set up a good study environment. Table, chair, quiet, no computers.
In emergencies, it may be necessary for others to forcibly accelerate your learning.
There’s a difference between learning a skill and learning a skill while remaining human. You need to decide which you want.
It is better to hold the sword loosely than tightly. This principle also applies to the mind.
Skills are packaged into disciplines because of correlated supply and correlated demand.
Have a high discount rate for learning and a low discount rate for knowing.
“What would so-and-so do?” means “try using some of so-and-so’s heuristics that you don’t endorse in general.”
Train hard and improve your skills, or stop training and forget your skills. Training just enough to maintain your level is the worst idea.
Gaining knowledge is almost always good, but one must be wary of learning skills.
Instrumental Rationality
As soon as you notice a pattern in your work, automate it. I sped up my book-writing with code I should’ve written weeks ago.
Your past and future decisions are part of your environment.
Optimization by proxy is worse than optimization for your true goal, but usually better than no optimization.
Some tasks are costly to resume because of mental mode switching. Maximize the cost of exiting these tasks.
Other tasks are easy to resume. Minimize external costs of resuming these tasks, e.g. by leaving software running.
First eat the low-hanging fruit. Then eat all of the fruit. Then eat the tree.
Who are the masters of forgetting? Can we learn to forget quickly and deliberately? Can we just forget our vices?
What sorts of cultures will endorse causal decision theory?
Big agents can be more coherent than small agents, because they have more resources to spend on coherence.
Doesn’t this depend somewhat on the relevance of the skill to the goal? My skills at cooking are reasonably adequate to the environment in which I live. I would classify them as better than average, but decidedly amateur. I don’t particularly want to prioritise them over my skill at playing a musical instrument, for which I get paid, but I wouldn’t like to lose too much of what cooking skills I do have as that would make my life more inconvenient, and certainly less enjoyable.
Within my work, musicianship can be broken down into a good many skills, all of which I need to maintain at their current levels to remain in employment, and some of which it may be worth my time and effort to improve. For my church job, if I were to try to improve my organ pedal technique at the expense of maintaining my ability to learn five hymns and two voluntaries per service to a reasonable performance standard, I would not hold my position long. There is a limit to how much time I can spend practising each week, and so the pedal technique, while important, will progress more slowly than if I did not have to maintain my skill at playing manuals-only (as I am also a pianist this comes much more easily to me). When my pedal technique catches up with my manual technique I will be able to redistribute my practice time to attempt improvement of both.
I’m having a hard time thinking of situations where there are no skills which are best kept at maintenance level, though it may be that for some people such extreme specialisation is efficient.
There’s a certain style distinct to many didactic quotes: they express claims in a wise-sounding but opaque way, so that they automatically appear deep without requiring the reader to actually think about them. This can cloak empty language and doubtful claims in a veneer of impressiveness—not to mention being uncommunicative if the ideas really are good.
It looks to me like these match that style. The ideas here could be both true and interesting, but making them into aphorisms (to fit Twitter) removes the explanation and examples that would convince me they’re true and interesting. As it is, they sound meaningless to me—the medium totally obscures the message.
I’d be interested in a post exploring some of these ideas, but tweets seem to me to be a format utterly unsuited to the topic.
[Also, I really think that this should not be on the front page. If even commenters have to puzzle over many of these, it’s not a good choice for the general audience.]
The real problem with pithy quotes is that if you disagree with the quote, it’s hard to argue without appearing unpithy.
(How was that for a pithy quote?)
Mostly, I think of pithy quotes as the conversational equivalent of icons in GUIs. If you don’t already have a pretty good clue what an icon does, the icon by itself isn’t very helpful… but once you become familiar with it, it can be very helpful.
Similarly, the nice thing about pithy quotes is that once you’ve understood the associated thought, they provide an easier way to bring that thought to mind on demand.
They can also provide a hook. That is, tossing a pithy quote into a conversation and providing additional explanation if there seems to be interest in it can be more comfortable than trying to toss a large chunk of exposition into conversation. (Well, for me, anyway. Some people seem more comfortable with tossing large chunks of exposition into conversations.)
All of which is to say, I’m fond of them.
I agree with you. My quibble is that ‘pithy’ means not merely ‘short’ but ‘concisely meaningful.’ If a line is short but confusing, it’s not pithy.
Michael Vassar has this quote on Twitter: “Every distinction wants to become the distinction between good and evil.” Which I’m sure I would have understood differently had I not previously read the post from which it (I believe) originated:
Your wariness of good pithy quotes makes sense; they are intended to be infectious memes. Just as our bodies need lots of bacteria in order to function, infectious memes increase the carrying and staying power of ideas in our minds. And just as our bodies are harmed if we get the wrong bacteria, our thinking can be made ill by the wrong memes.
For myself, I take the fact that I like a lot of these quotes as motivation to learn more about the ones I don’t understand. I am weak on the difference between beliefs and preferences, yet I’m sure it is on the edge of a very active part of my current thinking. I suspect the “high discount rate for learning, low for knowing” is backwards but I need to work it through to decide. Etc.
Pithy quotes like this increase the likelihood that I will follow through on learning more. Hopefully this is an effective way to protect against “bad” pithy quotes getting a neuron-hold on my brain.
I would say rather, “Test your hypotheses first on simple cases.” If they can be quickly disproven there, you can move on to more useful hypotheses.
I don’t think this necessarily applies to the arts. Or are you just saying that fields that measure quality poorly will attract all sorts of people?
Also, Goodhart’s Law applies—any field with high rewards will attract people who will try to modify the reward system in their favor.
Should that be “ignore the existence of uninteresting things”?
No; people are often discouraged from trying addictive things.
Ahh, I originally read this as evolutionary pressure, which I think goes in the opposite direction in this case. Dependence on some external stimulus can be adaptive if that stimulus is readily available and has otherwise beneficial effects. It’d be interesting to get a look at some of the game theoretic dynamics of intelligence, novel thinking, and so on—while you would expect an evolutionary pressure to entertain (at least some limited number of) novel ideas, it seems there may also be evolutionary pressure to discourage others from doing so (to maintain social cohesion, i.e., make your neighbours more predictable).
I question this best people tweet also. Some great people become public school teachers.
Some fields are simply harder to measure than others. I have previously known a physicist who DEFINED any question that he couldn’t address using the scientific method as “uninteresting.” Included in his list of uninteresting questions were things like the nature of human consciousness. It may be more valuable to search for excellence in fields where the metrics don’t work in a trivially transparent way, since the fields where they do work will be adequately investigated by following the crowd.
The arts are an excellent counter-example disproving the claim that able people avoid professions where the measure of product quality is poor. But there are examples within the sciences, too. The summaries I’ve seen (as well as folk knowledge) indicate that chemists are substantially less intelligent than psychologists. (Physicists, however, are smarter than psychologists.)
I think the mistaken assumption stems from the egocentric fallacy: we exaggerate how similar others are to ourselves. Recognition is only one of various motives in choosing a field, and let’s not overlook the obvious, interest in the subject matter.
“Best people” is vague. “More intelligent” might not be equivalent to “best”.
Still, I’m intrigued by psychologists turning out to be more intelligent (higher IQ?) than chemists. Maybe chemists tend towards very strong spatial and mathematical intelligence, but are definitely not as good on verbal, while psychologists are very strong on verbal but not comparably weak on spatial/mathematical. This is just a guess, though.
I want to know what this code was for.
Converting Go positions from SGF to LaTeX.
You don’t need to “train” to maintain your skills, using them will maintain them. If you don’t need particular skills for a longer time, they will gradually deteriorate. I rarely use all the math skills I have picked up, so I periodically do a fairly intense refresher on things I haven’t been using, then let it slide until it is needed, or I think I have forgotten enough to do another refresher.
I like this one. It works equally well against people who tend to eat the tree first and look down on fruit-eaters later, and against people who eat the low-hanging fruit and sit down, contented.
I, on the other hand, dislike it. Low hanging fruit is a useful and fairly accurate metaphor. This is taking the metaphor and torturing it. It sounds witty, but I don’t think it really gets across anything important, nor easily.
Don’t look now, but the rest of the comments on the grandparent are also torturing the poor thing. Thankfully, metaphors don’t have moral significance.
Why eat the tree at all? A live tree will continue to bear fruit. The quote seems to promote immediate use of the closest available resources over patience with higher payoff.
Metaphorical trees are possible to swallow whole, and will thrive in the environment of the metaphorical human stomach, so eating the tree means automatic benefits from all future fruit.
You have tortured this metaphor so hard that you have passed infinite negative utils and come back out on the positive infinity side.
The human brain represents utilons using a fixed-length integer field?
No, but this metaphor has utility occupying a discrete subset of the projective real line isomorphic to a cyclic group.
You win. :-)
I think the only real way to WIN is for us to torture this metaphor-utility metaphor until IT comes back out on the positive infinite side.
...0?
Very good!
What is this trying to say?
Reap the easy rewards first, but don’t stop there.
The incorrect view that high difficulty entails high reward (and conversely low difficulty entails low reward), when in reality reward and difficulty are not strongly correlated.
As per Peter’s comment below.
Excellent list. Please could you expand on/clarify these 2, I’m not sure what you mean and why:
Science before the mid-20th century was too small to look like a target.
There’s a difference between learning a skill and learning a skill while remaining human. You need to decide which you want.
Do you have preconceptions about what thinking should feel like? Do you want your instantiation of the skill to be more similar to other parts of your mind, or more similar to other instantiations of the skill? To get really good at something, it helps to remove constraints about how you achieve it.
Basically what JoshuaZ said, although science can also be a victim of convenience rather than just ideology. Using science for status gain without adhering to its rules also counts as targeting it, e.g. plagiarism.
I don’t really mean that science never looked like a target before the mid-20th, but it’s definitely much more of a target now.
That makes sense, thanks.
In this case I interpreted it to mean that there are many people now who specifically target science as bad (i.e. complain about “scientism” etc.) because science is a large, successful enterprise. He is asserting that before the mid 20th century science was not prominent or successful enough to bother being a target of attack. I’m not sure he’s correct here, but the basic notion is plausible.
Ah, I see. I was thinking “small target in a large search space” kind of target. Thanks.
Learning how to transplant a kidney is much easier when you have a few dozen people to experiment on. (I think that was the idea, anyways...)
I thought it referred to training at one thing so hard that you become incapable of doing anything else. Like the programmer who forgets to shower, eat, sleep, etc.
But the tweet is directed at the reader. Surely Peter didn’t expect many of his readers to be faced with that sort of decision while learning their skills?
And because of correlated and pre-requisite learning—skills, like other knowledge, builds on previously learned skills.
Yes. Coherence, and persuasiveness.
The individual that argues against whatever political lobby is quick to point out that the lobby gets its way not because it is right, but rather because it has reason to scream louder than the dispersed masses who would oppose it. But indeed, the very arguments the lobby crafts are likely to be more compelling to the masses, because it has the resources to make them so.
The lobby screams louder and better than smaller agents, as far as convincing people goes.
Some of these are very good, others a little bit less so. Granted, they come from a Twitter feed and are therefore spur of the moment, but I’m going to point out a few I disagree with.
I’m not sure this is always true. Ideally we should test in simple cases, but sometimes ruling out strange stuff requires using complicated cases. I’d prefer something like “Test your hypothesis on the simplest cases necessary to make a useful test.”
I strongly disagree with this. At minimum, one needs to form opinions about which opinions to trust. If you are a good rationalist you can often trust within some confidence the consensus of experts in a field when you have no other data. But not forming opinions easily can let you get taken in by the wrong experts. And not having any opinions will paralyze you.
This depends heavily on the definition of “useless”.
I like these two a lot, and I’m going to steal them and use them. I’m not sure the second is completely accurate, certainly most major Enlightenment figures who thought about these issues would argue strongly that this distinction is not a preference.
This one is very funny and another one I’m stealing. But I’m not sure there’s a substantial point here.
Edit and one more:
I’m not sure I agree with this. If I have multiple trees it might be better to get all the low-hanging fruit before I move on to the higher level fruit in any tree. This is especially true because easy results in related fields can help us understand other fields better and help us in our fruit plucking on the nearby trees. Focusing on a single tree until all fruit has been removed is generally not doable in almost any field.
Learning math sure isn’t useless, and it seems to mostly consist of thinking about useless or nonexistent things.
Possible. I didn’t check the literature before posting that tweet. Anyway I think both encodings are possible to some extent. “You can’t derive ought from is” is a belief. “People should distinguish between beliefs and preferences” is a preference.
This refers to a possible difficulty of introspection.
I learned a lot of math (undergraduate major), and while it entertained me, it has been almost completely useless in my life. And the forms of math I believe to be most useful and wish I’d learned instead (statistics) are useful because they are so directly applicable to the real world.
What useful math have you learned that doesn’t involve reference to useful or existent things?
I’ve hypothesised before that learning math might be useful because a) you get lots of practice in understanding abstraction and how abstract objects can meaningfully be manipulated using rules, and b) you hopefully learn that proofs are nobody’s opinion. So basically a lot of practice in using basic logic. Neither of which require study of useful or existing things.
Though obviously it would be preferable if the actual content were about useful stuff as well, to get double the benefit, it’s not inherently useless.
Real analysis is the first thing that comes to mind. Linear algebra is the second thing.
Lately I’ve been thinking about if and how learning math can improve one’s thinking in seemingly unrelated areas. I should be able to report on my findings in a year or two.
This seems like a classic example of the standard fallacious defense of undirected research (that it might and sometimes does create serendipitous results)?
Yes, learning something useless/nonexistent might help you learn useful things about stuff that exists, but it seems awfully implausible that it helps you learn more useful things about existence than studying the useful and the existing. Doing the latter will also improve your thinking in seemingly unrelated areas...while having the benefit of not being useless.
If instead of learning the clever tricks of combinatorics as an undergraduate, I had learned useful math like statistics or algorithms, I think I would have had just as much mental exercise benefit and gotten a lot more value.
I first learned calculus using infinitesimals.
The thought “I will not purchase this useless thing” is a thought about a useless thing, and it is not a useless thought. His formulation (“not necessarily”) means that technically, it doesn’t depend on the definition (given that you accept the previous example, of course).
I actually parsed that quote as “Eat all the low-hanging fruit (in the orchard). Then eat all the fruit (in the orchard). Then eat the tree(s).” Well, not specifically thinking orchard, but I imagined running along a row of trees plucking all the low-hanging fruit, then returning for all the fruit, then shrugging and uprooting the trees.
I like this one.
My observations suggest that many (maybe even most) people will ignore even pretty substantial inaccuracies for even a little flattery.
I think this tweet was overwhelmingly normative rather than descriptive.
Then it should be rephrased as ‘We should seek a model of reality that is accurate even at the expense of flattery.’
Ambiguous phrasings facilitate only confusion.
I read it as declarative—We (at spaceandgames) seek a model etc etc. Peter isn’t the only person on that twitter account.
How did you figure that out?
I misunderstood how retweeting works.
What do those two mean?
You reinforce your biases, which makes it harder to resume improvement later. Also, you get comfortable with being at a fixed level.
Knowledge tends to come in small chunks. If a chunk turns out to be wrong, you can discard it. Skills are harder to discard, and they’re always at least somewhat wrong.
If it’s ever more efficient to maintain proficiency through occasional practice (as it is with memory of facts; cf. spaced repetition) than to forget and later quickly re-learn, then that seems worth the minor risk you’re afraid of. Especially if the skill is one that’s useful to have ready, or to employ regularly.
Also, there’s usually not a good reason to stop at a particular level of skill. If the skill is worth having at all, it’s probably going to be worth having at the highest level you can achieve, and if it’s not worth having, continuing to train to keep the same level at it is probably a manifestation of the sunk cost fallacy.
Cute. But one of the failures of this ontology is the failure to distinguish between assumptions and assertions. And one of my assertions is that the distinction between beliefs and preferences is an assumption.
Can you go into more detail about this? I’m not sure what you mean.
The idea is not completely worked out. But the basic idea is that I distinguish between statements used as assertions and statements used as assumptions. (The division of statements into beliefs and preferences is orthogonal.)
When a statement is used as an assertion, the meaning of the statement flows from the relationship between the statement and the actual world. The truth of an assertion depends upon evidence. An assertion “pays the rent in anticipated experience”.
When a statement is used as an assumption, the meaning of the statement flows from the way that the assumption changes the way the world is talked about. The presence or absence of an assumption changes the meaning of statements—it may even change a statement from meaningless to meaningful. An assumption “pays the rent” by providing new ways of thinking about the world.
As I understand it, when we choose MWI over Copenhagen, we are making an assumption, not stating an assertion which may (depending on the evidence) be true or false about the world.
So, given this framework, I claim that the question of whether beliefs must be distinguished from preferences is not something to be decided by evidence or reasoning. Instead, we make that distinction as an assumption. Having made that assumption, we can then go on to make assertions or form analogies that flow from this assumption. The assumption enables speaking and reasoning. And we can make the “meta-assertion” that the distinction itself should be classified as an assumption.
I like it. It wasn’t what I was expecting when you used the word “assumption”, though; I usually think of assumptions as being assertions which are for whatever reason not questioned.
Maybe consider using another word, like “lens”, or “framework”, something that emphasizes their nature as support/modification for assertions.
Premise? That is what I got from
“1. The best people enter fields that accurately measure their quality. Fields that measure quality poorly attract low quality.”
I would edit it thus:
A- “Fields that measure quality poorly retain low quality, and repel The Best People.” (The Best People will get fed up and leave)
and a related:
B- “People of varying quality enter fields that reflect a lot of different things, low on the list of which is how that field measures quality. High on the list would be how that field is compensated.” (in all the variations of “compensation”: monetary, social status, secondary characteristics, ease of entry, ease of continuation, ease of exit strategy, duration to exit, do the chicks/men dig it, do the parents approve, etc....). There is a huge incentive to enter a field that measure quality poorly, but compensates the entrant well.
(this is somewhat related to the “public school teachers” comment already made, but I thought that reply veered into something else)
Some of these are so Peter de Blanc-ish. (Epistemology 7, Learning 7, etc.)
I suspect a ‘forgotten’ vice would regrow, and need to be rediscovered.
Depends on how much newly learned things have changed the rest of one’s behavior.
I like that you’re being terse. Many of these are puzzles—I need to discover a way to interpret them that allows me to like them.
Of these, I’m unsure:
Huh? It’s clearly a belief (part of a map). That seems easily grasped once the question is posed. Was the early Enlightenment not meta enough for your liking, for not posing it?
You’re implying that the insularity is a result of researchers gravitating toward areas where they’re rewarded by other researchers more so than something that directly helps outsiders? I don’t understand “scale faster”.
In this metaphor, are learning and knowing investments that will return future cash? Why should there be different discount rates? Do you think the utility of learning something is more uncertain than the utility of knowing something? I don’t understand how to act on this advice at all. Are you saying I should try hard not to forget what I already know, instead of learning new things?
What’s meant is clear; the reason for it isn’t.
Do you mean precommitment (burning the ships), or do you mean just to be aware of the cost of such a decision?
No. We can do neither. I can direct my attention slightly, and then maybe I’ll forget.
Endorse? You mean, use it somehow?
By learning, I mean gaining knowledge. Humans can receive enjoyment both from having stuff and from gaining stuff, and knowledge is not an exception.
It’s true that a dynamically-consistent agent can’t have different discount rates for different terminal values, but bounded rationalists might talk about instrumental values using the same sort of math they use for terminal values. In that context it makes sense to use different discount rates for different sorts of good.
Writing the above comment got me thinking about agents having different discount rates for different sorts of goods. Could the appearance of hyperbolic discounting come from a mixture of different rates of exponential discounting?
I remembered that the same sort of question comes up in the study of radioisotope decay. A quick google search turned up this blog, which says that if you assume a maximum-entropy mixture of decay rates (constrained by a particular mean energy), you get hyperbolic decay of the mixture. This is exactly the answer I was looking for.
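For what it’s worth, the claim can be checked directly: if each agent discounts exponentially at a rate drawn from an exponential distribution (which is the maximum-entropy distribution over nonnegative rates with a fixed mean μ), the mixture curve comes out exactly hyperbolic, 1/(1 + μt). A quick Monte Carlo sketch of this (the parameter values here are arbitrary, chosen just for illustration):

```python
import math
import random

random.seed(0)
mu = 0.5        # mean discount rate across the population (arbitrary choice)
N = 200_000

# The maximum-entropy distribution over nonnegative rates with mean mu
# is the exponential distribution; expovariate takes the rate 1/mu.
rates = [random.expovariate(1.0 / mu) for _ in range(N)]

for t in [0.0, 1.0, 2.0, 5.0, 10.0]:
    # Average the individual exponential discount curves at time t...
    mixture = sum(math.exp(-r * t) for r in rates) / N
    # ...and compare to the analytic mixture: E[e^{-rt}] = 1/(1 + mu*t).
    hyperbolic = 1.0 / (1.0 + mu * t)
    print(f"t={t:4.1f}  mixture={mixture:.4f}  hyperbolic={hyperbolic:.4f}")
```

The two columns agree up to sampling error, which is the same phenomenon as the hyperbolic decay of a maximum-entropy mixture of radioisotope decay rates.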
I disagree strongly with the second part of that; partially because there is no sharp boundary between knowledge and skills. “Skills” are just how-to knowledge that you have practiced using enough that you can do it without having to look up the details as you go.
I vote for a LW wiki page for this list, where for each quote, people can append links to LW posts relevent to that tweet.
[Edit: forget this bit] Then when we see which quotes don’t have any links, I vote for a vote on which of those quotes most deserves a new LW post.
Or, instead of voting, we could just write posts.
Wow! Many of these are worth a new LW post on their own.
“Not all entities comply with attempts to reason formally about them. For instance, a human who feels insulted may bite you.”
Haha... anybody else remember Rational!Harry?