Useful Concepts Repository
See also: Boring Advice Repository, Solved Problems Repository, Grad Student Advice Repository
I often find that my understanding of the world is strongly informed by a few key concepts. For example, I’ve repeatedly found the concept of opportunity cost to be a useful frame. My previous post on privileging the question is in some sense about the opportunity cost of paying attention to certain kinds of questions (namely that you don’t get to use that attention on other kinds of questions). Efficient charity can also be thought of in terms of the opportunity cost of donating inefficiently to charity. I’ve also found the concept of incentive structure very useful for thinking about the behavior of groups of people in aggregate (see perverse incentive).
I’d like people to use this thread to post examples of concepts they’ve found particularly useful for understanding the world. I’m personally more interested in concepts that don’t come from the Sequences, but comments describing a concept from the Sequences and explaining why you’ve found it useful may help people new to the Sequences. (“Useful” should be interpreted broadly: a concept specific to a particular field might be useful more generally as a metaphor.)
So many! Just a few that come to mind: compound interest, observation selection effect, bias, differential technological development, function, present value, correlation, subjective experience, trivial inconvenience, self-amplifying capacity, expectation, etc.
Also, many conceptual distinctions in philosophy are useful: analytic-synthetic, a priori-a posteriori, de re-de dicto, phenomenal-intentional, proximate-ultimate, intrinsic-instrumental, factual-normative, contingent-necessary, etc. Taxonomies of views in a given field, too, can be quite helpful in conceptualizing the relevant boundaries (e.g. Chalmers on views about consciousness or Miller on views in metaethics).
I guess one could say the concept of a useful concept is itself quite useful.
Here is a PDF for that.
Some perhaps-too-obvious ones: comparative advantage, arbitrage, Schelling point, plurality, transfer (in education), deliberate practice, tacit knowledge.
What aspects of arbitrage and plurality are you trying to emphasize?
The plurality link seems to be broken.
fixed
Most functions are not linear. This may seem too obvious to be worth mentioning, but it’s very easy to assume that various functions that appear in real life are linear, e.g. to assume that if a little of something is good, then more of it is better, or if a little of something is bad, then more of it is even worse (apparently some people use the term “linear fallacy” for something like this assumption), or conversely in either case.
Nonlinearity is responsible for local optima that aren’t global optima, which makes optimization a difficult task in general: it’s not enough just to look at the direction in which you can improve the most by changing things a little (gradient ascent), but sometimes you might need to traverse an uncanny valley and change things a lot to get to a better local optimum, e.g. if you’re at a point in your life where you’ve made all of the small improvements you can, you may need to do something drastic like quit your job and find a better one, which will temporarily make your life worse, in order to eventually make your life even better.
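The local-optimum trap described above is easy to see in code. Here is a minimal sketch (the two-hump function, step size, and starting point are all made up for illustration):

```python
def hill_climb(f, x, step=0.1, iters=1000):
    """Greedy local search: take a small step only when it improves f."""
    for _ in range(iters):
        if f(x + step) > f(x):
            x += step
        elif f(x - step) > f(x):
            x -= step
        else:
            break  # no small change helps: we're at a local optimum
    return x

# Two humps: a small one at x = 1 and a much better one at x = 4.
f = lambda x: max(1 - (x - 1) ** 2, 3 - (x - 4) ** 2)
x = hill_climb(f, 0.0)
print(round(x, 2), round(f(x), 2))  # stuck near the small hump at x = 1
```

Starting at 0, the climber settles on the nearby hump and never crosses the valley to the better one; no amount of extra iterations fixes that.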
The reason variance in financial investments matters, even if you only care about expected utility, is that utility isn’t a linear function of money. Your improvement in the ability to do something is usually not linear in the amount of time you put into practicing it (at some point you’ll hit diminishing marginal returns). And so forth.
Related: although many distributions are normal, it seems that people often assume an unfamiliar distribution must be either normal or uniform. Multimodal distributions in particular seem to be an oft-neglected possibility.
In particular, I’ll mention that most tech development curves are logistic.
Jordan Ellenberg discusses this phenomenon at length in _How Not to Be Wrong: The Power of Mathematical Thinking_. See here for some relevant quotes (a blog post by one of the targets of Ellenberg’s criticism).
It’s not just that. Even if your utility function with regards to money were linear, it would still be wise to try to decrease variance, because high variance makes returns not compound as well.
If you value yourself getting money in the future, then that should be taken into account in your utility function.
Not if you use the geometric mean.
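The point about variance hurting compounding, even with linear utility of money, can be shown with two hypothetical bets that share the same arithmetic mean return (illustrative numbers, not from the thread):

```python
import math

# Both bets average +25% per period arithmetically.
steady = [1.25, 1.25]  # +25%, then +25%
swingy = [2.00, 0.50]  # +100%, then -50%

def compound(factors):
    """Multiply the per-period growth factors together."""
    wealth = 1.0
    for f in factors:
        wealth *= f
    return wealth

print(compound(steady))  # 1.5625: wealth grows
print(compound(swingy))  # 1.0: wealth goes nowhere
# The geometric mean predicts long-run growth; the arithmetic mean doesn't.
print(math.prod(swingy) ** (1 / len(swingy)))  # 1.0
```

Same average return, very different compounded outcome: that gap is the "variance drag" the parent comment is gesturing at.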
Your second paragraph could benefit from the concept of simulated annealing.
There are lots of metaheuristic optimization methods; simulated annealing is the easiest one to explain and implement, but consequently it’s also the dumbest of them.
What are the best ones?
I’m personally a fan of tabu search, which prevents cycling and getting stuck in local optima by remembering features of previously seen solutions, and not visiting any solution with those features for some set length of time. “Best” depends on the particular problem, though; there are situations when the easy implementation of simulated annealing makes it a superior solution to a cleverer but longer implementation of something else.
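A minimal sketch of the tabu idea on a toy integer landscape (the landscape and tenure are made up, and real tabu search usually remembers solution features rather than whole solutions, as the comment says):

```python
from collections import deque

def tabu_search(f, x, steps=200, tenure=5):
    """Always move to the best non-tabu neighbour, even if it's worse;
    recently visited points are tabu, so the search cannot cycle back
    into a local optimum it just left."""
    tabu = deque(maxlen=tenure)  # short-term memory of visited solutions
    best = x
    for _ in range(steps):
        neighbours = [n for n in (x - 1, x + 1) if n not in tabu]
        if not neighbours:
            break
        x = max(neighbours, key=f)  # may be a downhill move
        tabu.append(x)
        if f(x) > f(best):
            best = x
    return best

# Local optimum at x = 2, global optimum at x = 7; plain hill climbing
# from 0 would stop at 2, but tabu search walks on through the valley.
f = lambda x: {2: 3, 7: 5}.get(x, -abs(x - 4))
print(tabu_search(f, 0))  # 7
```

Because the point it just left is tabu, the search is forced to keep moving and eventually stumbles over the better optimum, while `best` remembers the best solution ever seen.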
I thought about mentioning simulated annealing, but then it seemed to me like simulated annealing is more involved than the basic concept I wanted to get across (e.g. the cooling phase is an extra complication).
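For what it's worth, the cooling phase is only a few lines. A minimal maximization sketch (the bumpy function, schedule, and constants are all invented for illustration):

```python
import math
import random

def anneal(f, x, steps=20000, t0=2.0):
    """Minimal simulated annealing for maximizing f."""
    random.seed(0)  # deterministic for the example
    best = x
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9  # cooling schedule: T falls toward 0
        x_new = x + random.uniform(-1, 1)
        delta = f(x_new) - f(x)
        # Accept improvements always; accept downhill moves with probability
        # exp(delta / T), which shrinks as the temperature cools.
        if delta > 0 or random.random() < math.exp(delta / t):
            x = x_new
        if f(x) > f(best):
            best = x
    return best

# Bumpy landscape: local maxima near every multiple of 2*pi, global max at 0.
f = lambda x: math.cos(x) - abs(x) / 5
print(round(anneal(f, 10.0), 1))
```

Early on, high temperature makes the walk nearly random, so it can escape local bumps; by the end it behaves like plain hill climbing.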
But it’s actually important to the example. If someone intends to allocate their time searching for small and large improvements to their life, then simulated annealing suggests that they should make more of the big ones first. (The person you describe may not have done this, since they’ve settled into a local optimum but now decide to find a completely different point on the fitness landscape, though without more details it’s entirely possible they’ve decided correctly here.)
I propose making an analogy to Split & Merge Expectation Maximization (SMEM) instead.
It’s a very efficient algorithm for modelling that operates as follows:
1) Perform EM to find the local optimum.
2) Examine the clusters to determine which two are most similar, and combine them.
3) Examine the clusters to determine which one is least representative of the data it’s supposed to describe, and break it into two.
4) Perform EM on the three adjusted clusters.
5) Repeat 1-4 until the change in likelihood between iterations drops below some epsilon.
I think this is actually quite isomorphic to Goal Factoring (from Geoff Anders / CFAR) in that you’re trying to combine things that are similar and then break up things that are inefficient. At least, I spent an entire summer working on an SMEM clustering program (though some of that was UI) and they feel similar to me.
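The steps above can be sketched in miniature, using 1-D k-means as a crude stand-in for EM (all of this is illustrative toy code, not the real SMEM algorithm):

```python
import statistics

def assign(points, centers):
    """Hard-assign each point to its nearest center."""
    clusters = [[] for _ in centers]
    for p in points:
        i = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
        clusters[i].append(p)
    return clusters

def fit(points, centers, iters=25):
    """Stand-in for steps 1 and 4: alternate assignment and mean updates."""
    for _ in range(iters):
        clusters = assign(points, centers)
        centers = [statistics.mean(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

def split_merge_step(points, centers):
    centers = fit(points, centers)                       # 1) local optimum
    # 2) merge the two most similar (closest) centers into their midpoint
    _, i, j = min((abs(centers[a] - centers[b]), a, b)
                  for a in range(len(centers))
                  for b in range(a + 1, len(centers)))
    centers = ([c for k, c in enumerate(centers) if k not in (i, j)]
               + [(centers[i] + centers[j]) / 2])
    # 3) split the center that describes its data worst (largest spread)
    spreads = [statistics.pstdev(c) if c else 0.0
               for c in assign(points, centers)]
    w = max(range(len(centers)), key=lambda k: spreads[k])
    d = spreads[w] or 1.0
    centers = ([c for k, c in enumerate(centers) if k != w]
               + [centers[w] - d, centers[w] + d])
    return fit(points, centers)                          # 4) re-fit

# Three clumps of data, but two of the starting centers sit on one clump.
points = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2, 10.0, 10.1, 10.2]
print(sorted(round(c, 1) for c in split_merge_step(points, [0.0, 0.1, 9.0])))
# [0.1, 5.1, 10.1]
```

Plain k-means from that bad start wastes two centers on one clump; one split & merge pass combines the redundant centers and breaks up the overstretched one, landing on all three clumps.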
Simulated annealing is one of many techniques for optimizing in search spaces with a lot of local maxima. Is there a reason why you’re emphasizing that one in particular?
It provides a useful concept, which can be carried over into other domains. I suppose there are other techniques that use a temperature, but I’m much less familiar with them and they are more complicated. Is understanding other metaheuristics more useful, to people who aren’t actually writing a program that performs some optimization, than just understanding simulated annealing?
Regression to the Mean
In usability, the concept of Affordance is pretty useful.
I never really consciously thought about the idea of status until I came across it in this forum. It helped me identify and understand some behaviors in myself and others which puzzled me before.
Yes, me too. But it is Hanson’s idea as far as I know, from what is now LessWrong’s sister forum.
I’m pretty sure the idea has been around since long before Robin Hanson started writing about it.
I wouldn’t be surprised either way. That it was, or that it wasn’t around before. Still, this status-signaling view may be wrong sometimes. Or useless.
Status is far older than Hanson’s take on it, or than Hanson himself. But the idea of seeing status signalling everywhere, as an explanation for everything—that is characteristically Hanson. (Obviously, don’t take my simplification seriously.)
The idea of talking about seeing status signaling everywhere is characteristically Hanson. I would not be surprised in the least if many smart politicians and socialites throughout history had also observed this but had the good sense not to talk about it in public.
Haven’t pickup artists been explicitly discussing status signalling in the context of day-to-day, person-to-person interactions since the late nineties? And biologists have noted its pervasiveness all through the mid-to-late 20th century, at least. I’m sure cultural anthropologists have too, but I’m not as familiar with that literature. Nor am I, however, with any of Hanson’s posts on the subject, but a quick glance at the links on the LW wiki’s page puts them all in the mid 2000s, with nothing popping out as unusual. But I also couldn’t find anything of his that suggested status to be “an explanation for everything” (I’m guessing everything here means all human behavior? Though that too seems really unlikely), so maybe I wasn’t looking in the right places.
From off site:
Energy and Focus is more scarce than Time (at least for me), Be Specific (somewhat on site, but whatever),
From on the site:
Mind Projection Fallacy, Illusion of Transparency, Trivial Inconveniences, Goals vs. Roles, Goals vs. Urges
Extracting examples from some of my past comments: proving too much, selection bias, Nash equilibrium, denotation & connotation, insight & intuition as recognition.
Others from game theory & economics: free riders & hold outs, the tragedies of the commons & the anticommons, precommitment, coordination games, average-marginal confusion, thinking at the margin.
Two more that’re a bit Sequences-esque but which I like so much and use so often I’ll highlight them anyway:
Reference class forecasting. It doesn’t just help one beat the planning fallacy by predicting durations; it can predict probabilities too, and one can apply it to other people as well. Will a flaky friend show up to a meal? Run a reference class forecast. Am I likely to get a “yes” if I ask someone for such & such a favour? Run a reference class forecast. How likely is it that a claim an acquaintance has just made is true? You get the idea. (Guess I should throw in reference class tennis as well. Fortunately, just as with actual tennis, it’s hard to play reference class tennis on my own, so reference class tennis isn’t too big a risk when I do solo reference class forecasts.)
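The mechanics here are just base rates. A sketch, with Laplace's rule of succession thrown in (my addition, not the commenter's) so that small reference classes don't give overconfident 0% or 100% answers:

```python
def reference_class_forecast(outcomes):
    """Estimate P(event) from past members of the reference class
    (1 = happened, 0 = didn't), with add-one (Laplace) smoothing
    so small samples stay humble."""
    return (sum(outcomes) + 1) / (len(outcomes) + 2)

# A flaky friend showed up to 3 of the last 8 meals (1 = showed up):
print(reference_class_forecast([1, 0, 0, 1, 0, 1, 0, 0]))  # 0.4
```

The raw base rate would be 3/8; the smoothed version nudges it toward 50%, which matters most when the reference class is tiny.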
The typical mind fallacy, which I deliberately use as a heuristic. If I’m trying to explain someone else’s behaviour (or figure out an aspect of someone else’s psychology more generally), I use myself as a model and ask myself why I might do the same thing. This may give an incorrect answer, but it gets me an answer faster than trying to derive someone else’s behaviour from first principles, and I can always introspect further to guess how well my self-derived answer might generalize to others.
Slightly less well-known cognitive biases: just world fallacy, mean world syndrome, and that one bias that really needs a name where people underestimate mundane risks and overestimate dramatic risks.
Statistics: skewness (and medians as outlier-&-skew-resistant averages), standard error (and error bars more generally), systematic error vs. random error, effect size, power, meta-analysis, bootstrapping & Monte Carlo methods, sampling methods.
Are you distinguishing between this phenomenon and the availability heuristic?
Now you prompt me to reflect on it: yes, I effectively am. The bias with no name is just a special case of the availability heuristic, but it’s sufficiently common/salient in my experience that my mind’s tacitly upgraded it to the status of a free-standing concept.
Credibility also can be useful. Most importantly: Are the threats, precommitments and offers you make credible? Could and would you actually go through with them if you found yourself in a situation where the conditions you stated are fulfilled? If you arrange the exchange in such a way that acting on your words imposes a low cost (better yet: no cost) on yourself, you’ll gain lots of bargaining power.
A quick example: When educating children, misbehaviour has to have consequences. Now, you have to choose these punishments in such a way that they impose little organizational and emotional cost on yourself while being serious enough that kids want to avoid them (but, of course, also not too serious ;)). If done correctly, you’ll have to punish a few children a few times, but then they will have learned. If done incorrectly, you continuously threaten with punishment, but there is no clear line where you have to act, and put in such a situation you don’t even want to punish them, so the children will continue to misbehave.
Unfortunately, this quick example is something that teachers and school administrations often don’t get. If you make a threat for misbehavior, you must follow through; otherwise you have seriously undermined your own and every other teacher’s credibility, and then you will predictably get misbehavior in almost every lesson from then on.
Unfortunately, the reality is often that teachers make empty threats, hoping that using big words will scare students, and when this does not work, they rationalize not following through with “they are just children” and “they need to get a second chance”, although some students have already had literally hundreds of “second” chances. And the worst thing is when a teacher insists on the punishment, but after a phone call from a parent, the school administration overrides their decision. Of course students share their “success stories”, so the next day the whole school knows about the winning strategy.
It is possible that their target audience is their superiors, not the students. Threat is cheap, punishment is expensive, and they can always report to their superiors (and possibly parents) “we do not condone this behavior, see, we threatened them with ”.
Thomas Schelling proposed a useful strategy: make small threats for small infractions, and then follow through on them. This gives credibility to your larger threats, without too much inconvenience for either party.
(And, of course, try to make the whole thing as predictable as possible; never be capricious with your own authority.)
The justice system of the old Soviet Union had, rather ironically, the following maxim:
From an article about the US justice system, but the relevance to misbehaving schoolchildren (or simply schoolchildren whose behaviour one doesn’t like) is obvious:
Interestingly, these correspond to delay, expectancy and value in the procrastination equation. It’s notable to see “negative” values used to form a kind of anti-motivation.
Awesome, this is worth its own post IMO.
I hadn’t noticed that. That’s a pretty shrewd connection! Come to think of it, the “excessive discounting”/”excessive present-orientation” Kleiman mentions is suspiciously similar to the procrastination equation’s remaining term, impulsiveness.
I wonder whether criminologists discovered this independently of psychologists & neuroscientists? Might be an example of two parts of academia converging on the same answer from different directions.
The concept of privilege, of the “check your...” variety. It’s not without its problems as a tool—it can too easily be used as a Fully General Counterargument—but it’s an important thing to be aware of, and probably the single concept I’ve learned in the last two years that has most changed my outlook on the world.
“Shibboleths,” said Will Newsome, somewhat passive-aggressively.
...
This gets irony points, since the real (well, modern Hebrew) pronunciation is “Shibolet”, and “Shiboleth” would identify you as a foreigner.
Yes, the original distinction was between “Sibolet” and “Shibolet”. “Th” isn’t even a sound that exists in Hebrew.
Robin Hanson’s marginal charity has been quite useful for me.
Especially, when I use it for doing marginal charity towards myself. Example: I’m eating a slice of cake. The difference in utility for me between eating 3/4ths of it and all of it is small. So if I leave a 1/4th of it for future-me, the difference in utility for future-me is much larger since it is the difference between having no cake and having some cake.
Various bits from Dennett: cranes versus skyhooks, the Cartesian Theatre, ‘figment’, the intentional stance, the notion of an intuition pump.
Braitenberg’s Vehicles does something indescribably non-verbal. It’s a tiny book that passes through your mind silently, like a ninja, and then any lingering shreds of vitalism you have just sort of explode and blow away on the wind. Every child should have a copy (and robot parts to use it with).
Seconding Braitenberg’s Vehicles (if you can’t get the book, see the links here).
I’ll third it. http://web.mit.edu/~luke_h/www/BraitenbergVehicles.pdf
There is also a variety of demos/software, under Google:”braitenberg vehicles simulation” not sure which one is worth trying.
Dennett—Competence without Comprehension. The magic that is real is not real magic.
Links?
Added some, sorry. Cranes vs. skyhooks is the central metaphor of Darwin’s Dangerous Idea but doesn’t seem to have its own page anywhere; it seems to me akin to EY’s notion of “follow-the-improbability”.
“Figment” is from Consciousness Explained; it’s sort of the notion of a volume of imagined colour. Dennett used it in a discussion of e.g. the retinal blind spot, to make the distinction between the brain’s actively “filling in” a missing signal (which he takes to be a confused reification) versus the brain simply not caring about the missing signal.
Cost of delay. If you have two projects which, when finished, will generate (say) the same amount of value per unit time, and it takes 2 weeks to do the first and 1 week to do the second, then by sequencing them in the other order you gain 1 week’s worth of value. If instead you interleave them, working “in parallel” with no real gains in total time, you lose 1 week’s worth of value. Also, and more obviously: if one project generates far more value per unit time, finish it first; and if a project loses value while unfinished, consider finishing it first (for example, taking an idea to market: your market research depreciates while you wait).
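The arithmetic in the example, made concrete (the horizon and per-week rate are arbitrary numbers I've picked for illustration):

```python
def value_by(finish_week, rate, horizon):
    """Value a project generates between finishing and the horizon."""
    return max(0, horizon - finish_week) * rate

horizon, rate = 10, 1.0  # weeks; value per week per finished project

a_then_b    = value_by(2, rate, horizon) + value_by(3, rate, horizon)  # A (2w) first
b_then_a    = value_by(1, rate, horizon) + value_by(3, rate, horizon)  # B (1w) first
interleaved = value_by(3, rate, horizon) * 2                           # both done at week 3

print(b_then_a - a_then_b)     # 1.0: doing the short project first gains a week of value
print(a_then_b - interleaved)  # 1.0: interleaving loses a week of value
```

Either way you schedule them, both projects are done by week 3; the only thing that changes is how early the first one starts paying off.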
This is an application of opportunity cost?
The tradeoff between efficiency and accuracy. It’s essential for computational modeling, but it also comes up constantly in my daily life. It keeps me from being so much of a perfectionist that I never finish anything, for instance.
Fail-fast.
Can you expand on this? How has this concept helped you understand things?
It’s not so much a way to understand the world as it is a deep engineering principle. You want to design systems in such a way that if there is a problem, the system fails sooner rather than later. One very counter-intuitive technique used in software engineering is the assertion, which has no effect other than to shut the system down if a certain expression evaluates to false. Naively, it seems crazy that we would ever want to add to the set of conditions that will cause a crash—but that is exactly the point.
Failfast is part of the more general strategy expressed by the slogan tighten your feedback loops. If you are doing something stupid, you want to know as soon as possible. This is hard for humans to do, since we have a natural aversion to bad news and discipline.
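A minimal illustration of the counter-intuitive point about assertions (the function is hypothetical, not from the comment):

```python
def average(xs):
    # Fail fast: an empty list is a bug in the caller. Crashing here, with
    # a clear message, beats a ZeroDivisionError (or a silently wrong value)
    # surfacing far away from the actual mistake.
    assert len(xs) > 0, "average() called with an empty list"
    return sum(xs) / len(xs)

print(average([1, 2, 3]))  # 2.0
```

Deliberately adding a way to crash looks like making the program more fragile, but it converts a mysterious downstream failure into an immediate, well-labelled one.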
Moral Foundations theory (all moral rules in all human cultures appeal to the six moral foundations: care/harm, fairness/cheating, liberty/oppression, loyalty/betrayal, authority/subversion, sanctity/degradation). This makes other people’s moralities easier to understand, and is an interesting lens through which to examine your own.
The Big Five Personality Traits—though I’ve heard these don’t seem to fit non-Westerners very well. Probably still useful when thinking about Westerners. (For example, when evaluating someone as a romantic partner or a business partner in some risky venture, I find it useful to deliberately consider their neuroticism. Or when considering suggesting someone try traveling or anything adventurous, their Openness To Experience is probably relevant.)
A teleological, non-reductionist worldview, supposedly traceable through Plato, Aristotle, Augustine, and Aquinas, is wrong, but it is a useful concept to be aware of because some people think it’s correct. It’s related to why some people, particularly some religious people, oppose homosexuality. Edit: I should add that I’m not recommending an in-depth study of this concept, just reading a few blog posts on it, and then more if it’s interesting to you or if you really need to engage with believers for some reason.
Any chance you have a source for more information on that? Seems interesting.
I’d skimmed a paper from a couple of years ago, How Universal Is the Big Five? Testing the Five-Factor Model of Personality Variation Among Forager–Farmers in the Bolivian Amazon. Abstract:
Very interesting. Thank you.
Could you explain this? Also, the Platonic and Aristotelian theories are very different. For instance, Aristotle believes that forms exist only in concrete (and strictly speaking, living) things, and that there are no forms of mathematical objects.
I’m no expert on this, but I refer you to Yvain’s series on The Last Superstition by Ed Feser: one, two, three, four. As Yvain quotes Feser:
I think the idea is that a homosexual human is a toothpaste-eating squirrel—the instantiation has deviated from its ideal form. And unlike squirrels, humans can reason and choose whether to act in conformity with our supposed nature.
If you assume some unsubstantiated premises, I guess this makes sense. And that’s why people are talking past each other when they argue about whether homosexuality is natural. The theist claims homosexuality isn’t natural, taking “natural” to mean “conforming to the ideal form”. The liberal points to homosexuality in animals, taking natural to mean “appearing in nature”.
(I may have horribly distorted this—I haven’t read the book.)
Edit: Here’s a theist talking about the book, in case you want an explanation from a believer.
I see, thanks for the links. I think it might be more accurate to refer to Feser’s theory of forms, or a thomistic theory of forms in the great-grand-parent comment. These arguments aren’t closely related to anything in Plato or Aristotle’s actual writings. And needless to say, Aristotle and Plato were not Christians and had no particular interest in the issue of homosexuality.
Fair enough! I’ve edited it.
[META]
I suggest moving “repository” threads to Main, because they are threads where it would still make sense to add new entries arbitrarily long after they are originally posted, and ISTM that new comments in old threads are more likely to be noticed in Main than in Discussion.
Both positive and negative black swans. Additionally: randomness and regression to the mean.
(For some reason I was under the impression that the phrase “black swan” only applied to negative ones, but on reading the WP article I can’t find anything to that effect, so I guess I was wrong.)
Here’s a bunch of useful business concepts, as put together by Investling.
Supervenience. For example, “the mind supervenes on the brain.” The idea here is that a mind cannot be different without the corresponding brain being different as well.
I have a list of ~500 concepts / “patterns in reality” that I think it would be useful to have “handles” to, by having words/phrases for them or just having them as mentally-cached concepts. (Once I started actively looking for such concepts I found quite a lot of them.) I’m planning to someday-soon sit down and actually make up words/phrases for each of these; I’ve made up words/phrases for 37 of them, and my notes-to-self are already peppered with them. Unfortunately I don’t have the time right now to put them all here, but I probably will when my notes are more organized (like, when I’ve actually made up the words).
One example of such a concept: “imposing conditions that would have been evidence about optimal behavior in the EEA, with the intent of making oneself more prone to that behavior”—I call this “savannah-signalling” and it comes up in a lot of places.
D’you think it’d be worth posting the 37 you’ve invented names for? My appetite’s been whetted!
I’d be curious to see some of these as well!
Three years later- have you found the time? I’m really curious to know the rest of these.
Someone put together a nice list of concepts based on Charlie Munger’s “mental models”
http://www.thinkmentalmodels.com/
tangent: System 1 seems to control how “profound”, and thus likely-to-apply-in-the-future, any given concept feels. Venkatesh Rao has written a piece on this that I can’t find right now, but the gist was that we glom onto concepts that allow more efficient mental organization: for example, discovering that two phenomena we thought were separate are actually sub-cases of some more basic phenomenon. An important point is that we do this speciously, since our pattern recognition is overactive (false alarms are worth it when checking for leopards). This predicts wide-ranging failures such as religion, policy wonkery, conspiracy theories, etc.
Anyway, the point is that my process for finding, evaluating, and adding such concepts to my permanent repository of cognitive tools is not well defined, and this bothers me. I’ve tried explicitly adding concepts to my permanent toolbox without being positive they will be helpful, for example when I used Anki (spaced repetition software) to help remember biases and fallacies. I found it hard to stick with this even though it did in fact seem to help me notice more often when I was making specific errors. So I guess what I’m basically asking is: why aren’t we spending a lot more time improving the checklist of rationality habits, especially via empiricism?
I like to do this a lot in mathematics, but fortunately mathematical language is both rich and rigorous enough that I can avoid false alarms in that context (category theory in particular is full of examples of phenomena that look separate but can rigorously be shown to be subcases of a more basic phenomenon).
I think of mathematics as being a conspiracy theorist’s fantasy land: it works nearly the way a conspiracy theorist thinks reality works.
Well, that’s something like what CFAR is trying to do.
Vipul Naik’s concept of a twofer: the attempt to undermine a claim by combining two objections that would be substantiated by opposite empirical facts, and by arguing that one of the objections must hold regardless of how the facts turn out to be.
I like the concept, I don’t like the name. “Twofer” is getting two of something for the price of one and I don’t think the word maps neatly to the concept.
That’s not necessarily a fallacy. That’s more of an application of the law of the excluded middle.
Who said it was a fallacy? Seems like a valid argument to me.
The article you linked to for one.
I’m not trying to be dense, but where does the article claim that “twofers” are fallacies? Can you quote the relevant passage?
EDIT: FWIW, I asked Vipul and he denies that the article makes such a claim. He also notes that the argument would be fallacious if its proponent argued that both of the empirical facts obtain (rather than arguing that one of the two must obtain).
It doesn’t say that explicitly, but he doesn’t sound very convinced that the argument is valid either; but hey, word of God.
My personal “interesting concepts repository” lists are probably grossly inappropriate to post here, since they’ve got thousands of entries, and they’re not accumulated systematically or sorted well or even specifically selected for usefulness. In fact many entries might even be anti-useful if examined uncritically, since they were saved as reminders that “a lot of people think/thought X” despite not satisfying “I think X”. I’m also hesitant to post the lists on the web for that latter reason; who knows what idiot in HR might someday decide to Google my username and freak out when they find it attached to a list with a bunch of crimethink entries?
Nevertheless, it occurs to me that people who decided to read through this thread might still find those lists interesting despite the many caveats. Email me if anyone wants a copy.
Learning about neurobiology. I’ve found that the more I know about how the brain works, the more cognitive science makes sense.
People assume memories are stored in one region of the brain. From the inside, it feels like all this knowledge is obviously coming from one place. Factual information about an elephant (weight, where it lives, etc.) is related to the mental image of an elephant (gray skin, big ears, a trunk), but brains store that information in completely different places.
Ah, conceptual gardening. Meta, the fourth wall, diagonalization, God, Tathāgata/Tathāgatagarbha. Optimization, credit assignment, signal/noise. Marginalization, opportunity cost, comparative advantage. Incentive, affordance, salience, Schelling point. Likelihood ratio. Compilers/universality. Reversibility. Measure, decision-theoretic-significance/policy-relevance, justification. Levels of organization, levels of abstraction, level-crossing. Logos, Platospace. Pattern attractors, emergence, teleology. Timefulness/timelessness, self-fulfilling prophecy, spurious proofs, quietism/grace. Self-defeatingness, self-consistency, categorical imperative, indexicality. Recursion, self-reference, reflection.
I think I don’t really understand ‘meta’.
Meta happens when we go from talking about X, to talking about talking about X. Some examples:
Wikipedia’s example: “This debate isn’t going anywhere.” If we’ve been talking about politics, now we’re talking about our discussion of politics.
A self-hosting compiler is a compiler for language X that is written in language X. So it parses, optimizes, and “understands” programs written in X; and can be run on itself.
This is not a pipe.
A meta-joke is a joke about jokes, or one whose humor relies on the hearer’s expectation that a particular kind of joke is being told, with the punch line revealing that it’s not that sort of joke at all.
“Meta” is also used to mean “at a higher level of organization”. For instance, if there is a competitive game G, then the metagame of G is the interaction of different styles of play in a given pool of players.
Thanks, that’s helpful.
Can you be more specific? What don’t you understand?
I’m not sure I can be much more specific. In some cases, the distinction between X and meta-X is transparent to me, such as when people on LW begin talking about the discussions on LW, they are ‘going meta’. I get that.
But especially in the case of things like meta-ethics or meta-physics, I often get lost. I can’t come up with a general rule for distinguishing meta-X from plain old X. Maybe there is no such general rule, and it’s a case by case thing.
“Meta” as used in “metaphysics” doesn’t have the same meaning as “meta” as it is commonly used on LW. Quoth Wikipedia:
“Metaethics” is also not quite the same meaning, although I guess you can cash it out as “the ethics of ethics.”
Well, that’s the etymology of ‘meta’ as applied to metaphysics, but that’s not its meaning. No one today uses the term ‘metaphysics’ to refer to a book anthologized after a work on physics.
That’s a very interesting characterization. Could you expand on that a bit, if you have the time?
But they sort of do, right? They use “metaphysics” to refer to a tradition in philosophy started by what Aristotle talked about in the books anthologized after his works on physics. My point is I certainly would not cash out “metaphysics” to “the physics of physics” in the same way that I would cash out “metamathematics” to “the mathematics of mathematics.”
If you think ethics is the study of the question “what should we do?” then metaethics is the study of the question “how should we determine what we should do?”
This is merely a curiosity, but I think it does make non-negligible sense to think of metaphysics as the physics of physics. Abstracting from regularities of experience to regularities of regularities of experience. Metaphysics tells us what laws of physics are logically possible in the same way physics tell us what patterns of experience are physically likely.
Ah, I think the characterization ‘ethics of ethics’ or ‘physics of physics’ is misleading then: you don’t mean a physical theory of physical theories, or an ethical theory of ethical theories. This makes it sound like the meta-study asks the same questions as the object level study, only about the object level study. But I take it you mean that the meta-level study asks different kinds of questions; in your example, meta-ethics asks epistemological questions about ethics, while ethics asks ethical ones about the immediate objects of study.
No, I think metaethics asks ethical questions about ethics. Those ethical questions may get reduced to epistemological questions, but the epistemological questions are only important because they’re supposed to help answer the ethical question “how should we determine what we should do?” (I’m thinking of “should” here as an ethical modal operator.)
Whatsoever thy hand findeth to do, do it with thy might; for there is no work, nor device, nor knowledge, nor wisdom, in the grave, whither thou goest. - Ecclesiastes 9:10.
Good quote, but this isn’t the right place to post it.