Living Luminously
The following posts may be useful background material: Sorting Out Sticky Brains; Mental Crystallography; Generalizing From One Example
I took the word “luminosity” from “Knowledge and its Limits” by Timothy Williamson, although I’m using it in a different sense than he did. (He referred to “being in a position to know” rather than actually knowing, and in his definition, he doesn’t quite restrict himself to mental states and events.) The original ordinary-language sense of “luminous” means “emitting light, especially self-generated light; easily comprehended; clear”, which should put the titles into context.
Luminosity, as I’ll use the term, is self-awareness. A luminous mental state is one that you have and know that you have. It could be an emotion, a belief or alief, a disposition, a quale, a memory—anything that might happen or be stored in your brain. What’s going on in your head? What you come up with when you ponder that question—assuming, nontrivially, that you are accurate—is what’s luminous to you. Perhaps surprisingly, it’s hard for a lot of people to tell. Even if they can identify the occurrence of individual mental events, they have tremendous difficulty modeling their cognition over time, explaining why it unfolds as it does, or observing ways in which it’s changed. With sufficient luminosity, you can inspect your own experiences, opinions, and stored thoughts. You can watch them interact, and discern patterns in how they do that. This lets you predict what you’ll think—and in turn, what you’ll do—in the future under various possible circumstances.
I’ve made it a project to increase my luminosity as much as possible over the past several years. While I am not (yet) perfectly luminous, I have already realized considerable improvements in subsidiary skills such as managing my mood, hacking into some of the systems that cause akrasia and other non-endorsed behavior, and simply being less confused about why I do and feel the things I do and feel. I have some reason to believe that I am substantially more luminous than average, because I can ask people what seem to me to be perfectly easy questions about what they’re thinking and find them unable to answer. Meanwhile, I’m not trusting my mere impression that I’m generally right when I come to conclusions about myself. My models of myself, after I stop tweaking and toying with them and decide they’re probably about right, are borne out a majority of the time by my ongoing behavior. Typically, they’ll also match what other people conclude about me, at least on some level.
In this sequence, I hope to share some of the techniques for improving luminosity that I’ve used. I’m optimistic that at least some of them will be useful to at least some people. However, I may be a walking, talking “results not typical”. My prior attempts at improving luminosity in others consist of me asking individually-designed questions in real time, and that’s gone fairly well; it remains to be seen if I can distill the basic idea into a format that’s generally accessible.
I’ve divided up the sequence into eight posts, not including this one, which serves as introduction and index. (I’ll update the titles in the list below with links as each post goes up.)
You Are Likely To Be Eaten By A Grue. Why do you want to be luminous? What good does it do, and how does it do it?
Let There Be Light. How do you get your priors when you start to model yourself, when your existing models are probably full of biases?
The ABC’s of Luminosity. The most fundamental step in learning to be luminous is correlating your affect, behavior, and circumstance.
Lights, Camera, Action! Luminosity won’t happen by itself—you need to practice, and watch out for key mental items.
The Spotlight. Don’t keep your introspection interior. Thoughts are slippery. Label and organize whatever you find in your mind.
Highlights and Shadows. As you uncover and understand new things about yourself, it’s useful to endorse and repudiate your sub-components, and then encourage or interrupt them, respectively.
City of Lights. It’s a handy trick to represent yourself as multiple agents when dealing with tensions in yourself.
Lampshading. When you have models, test them—but rig your experiments!
Bonus posts!
Ureshiku Naritai: A story of how I used luminosity to raise my happiness set point.
On Enjoying Disagreeable Company: a luminosity-driven model of how to like people on purpose.
Seven Shiny Stories: concrete fictional descriptions of luminosity techniques from this sequence in action. (NOTE: Several people have remarked that SSS dramatically improved their understanding of the sequence. You may want to read each Shiny Story alongside its associated post. Each Shiny Story opens with links to the relevant segment, and commenter apophenia has cleverly crossposted the stories under the top posts.)
I have already written all of the posts in this sequence, although I may make edits to later ones in response to feedback on earlier ones, and it’s not impossible that someone will ask me something that seems to indicate I should write an additional post. I will dole them out at a pace that responds to community feedback.
This preparation sounds great. Thank you for taking such care with the writing, and with providing this introduction. The idea of thorough, regulated introspection is new to me, and I’m looking forward to hearing from somebody who’s put a lot of thought into it.
A site where people (1) do deep original thinking, then (2) spend considerable time and effort to write accessibly about it, and (3) refine the ideas through civil discussion: all of these things are so rare that the combination of them on this site makes it the best philosophy/discussion forum I’ve ever been a part of.
Surely you mean The RGB’s of Luminosity. Ahem.
I like that you’re including forward links in your sequence. (I still think LW ought to automatically include adjacent-post-by-date-order links, too.)
I actually have things that start with A, B, and C, and I didn’t even have to contrive too hard.
Quick definition request: what’s an alief? Google shrugs at it.
An alief is an independent source of emotional reaction which can coexist with a contradictory belief. For example, the fear felt when a monster jumps out of the darkness in a scary movie is based on the alief that the monster is about to attack you, even though you believe that it cannot.
Searching for alief and belief together brought up this relevant PDF.
Thanks—just learning that concept has actually appreciably increased my (self) understanding.
In case it isn’t obvious to people: The name is a pun. If there are “b”-liefs there must be “a”-liefs. One way to think about an alief is as a kind of proto-belief.
Another one that I think has yet to escape Benton house is ‘cesire’, along the same lines.
All I’m finding on the Internets is Aimé Césaire—elaboration?
I would assume that cesire is a modified version of desire, possibly a tendency to act to further a certain cause even if you desire something else.
So would I; I would still like an elaboration.
It’s from p642 of the pdf you linked.
Thanks! It took me a while to sort of get a handle on the idea—I still didn’t get it when I posted the above comic.
Edit: The above comment. Geez, sleep-deprived much?
At the time that I encountered rationalist fiction, I thought it was interesting but not especially relevant.
Then I skimmed through the Sequences briefly and realized that I was already working out a concept extremely similar to this one, under a different name but with the same methods and goals. This convinced me that at least some people in this subculture probably knew what they were talking about.
Encountering a more developed concept of luminosity that looked like my previous concepts of “radical self-knowledge” also gives me a good place to link to when explaining the concept to the uninitiated and better keywords to search with when looking for books and articles. (It’s called heuristics and biases, not structural brain quirks...)
I have independently discovered and used similar techniques to increase happiness*. I also frequently draw comment for being unusually self-aware.
Alicorn, thank you for writing this sequence. I like not feeling like the lone dissenter, however effective the methods actually are.
-* There was previously another statement here that it turns out was extremely premature. 6-10-12
You’re welcome :)
This sequence preview definitely looks promising...
...and, to a noob (that is, a me in the grip of Mind Projection Fallacy) screams “WEIRD SELF-HELP CULT” in huge neon letters. Anyone else notice this?
To a first approximation, all nontrivial advice on messing with the workings of your own head sounds weird; and self-help has a bad reputation because most of the people who consume it are losers, not winners looking to win harder. Also, honestly, there are weirder, cultier things on the site: anti-deathism, for one.
The rest of the sequence looks like it will be excellent. I think evidential introspection is a wonderful topic for this site.
FWIW, this is more commonly known as “cognitive behavioural therapy”, with focus on “schema therapy”.
I just reread these and they’re great! I didn’t think much of them at the time, but I seem to have internalized them and actually fixed some problems in my life as a result.
Thanks!
Brilliant idea for a series! I spend a lot of time thinking about this; trying to understand my thoughts and consequently hack them.
It’s really interesting how much variation there is in people’s ability to comprehend the origin of thoughts. Also it’s surprising how little control, or desire for control, some people have over their decisions. Certainly seems like something that can be learnt and changed over time. I’ve seen some significant improvements myself over the past 12 months without many exterior environmental changes.
The main hurdle I hit up against is confidence in my conclusions—introspection can’t be scientific by definition. I find it really difficult to measure improvement over time. Definitely interested to see how you deal with this!
What you observe via introspection, is not accessible to third parties, yes.
But you use those observations to build models of yourself. Those models can be made explicit and communicated to others. And they make predictions about your future behavior, so they can be tested.
This is just begging for more tests! ;)
I think “Which parts are “me”?” is quite relevant to this sequence.
That’s most relevant to “City of Lights”, wherein I will link to that very post.
This looks like an interesting subject! Introspection is a bit of a difficult research assistant, but in some cases, it’s the best that we have.
A minor point: you write that
and also that the term ‘luminosity’ is already in use in a related but different sense. Would it then not be clearer to simply call it ‘self-awareness’? Or something else, say ‘lucidity’ (I’m sure there’s something better), if you want to diverge from what’s normally meant by self-awareness.
Anyway, looking forward to the rest of the sequence.
I think it doesn’t hurt to have a term that calls up not only the notion of self-awareness, but also the attitude that Alicorn is creating about it. It will also help indicate the coherence of the sequence.
I love the standard that LessWrong.com sets for philosophy, and will be extremely pleased if this sequence can meet that standard on such an important topic!
Meta-cognition is the standard term for “luminosity”. The Wikipedia entry might be an interesting read. I have done a lot of mind hacking, myself. :)
If you gain root, do release the source code for your patches. You might think you’re just making some improvements, but… after a while, too many new improvements can become more like a new human operating system. You can become so different that people will not be able to understand you anymore.
Re-arranging your consciousness is serious business. Don’t take it lightly. Aside from the social consequences, there are also system design pitfalls.