Living Luminously
The following posts may be useful background material: Sorting Out Sticky Brains; Mental Crystallography; Generalizing From One Example
I took the word “luminosity” from “Knowledge and its Limits” by Timothy Williamson, although I’m using it in a different sense than he did. (He referred to “being in a position to know” rather than actually knowing, and in his definition, he doesn’t quite restrict himself to mental states and events.) The original ordinary-language sense of “luminous” means “emitting light, especially self-generated light; easily comprehended; clear”, which should put the titles into context.
Luminosity, as I’ll use the term, is self-awareness. A luminous mental state is one that you have and know that you have. It could be an emotion, a belief or alief, a disposition, a quale, a memory—anything that might happen or be stored in your brain. What’s going on in your head? What you come up with when you ponder that question—assuming, nontrivially, that you are accurate—is what’s luminous to you. Perhaps surprisingly, it’s hard for a lot of people to tell. Even if they can identify the occurrence of individual mental events, they have tremendous difficulty modeling their cognition over time, explaining why it unfolds as it does, or observing ways in which it’s changed. With sufficient luminosity, you can inspect your own experiences, opinions, and stored thoughts. You can watch them interact, and discern patterns in how they do that. This lets you predict what you’ll think—and in turn, what you’ll do—in the future under various possible circumstances.
I’ve made it a project to increase my luminosity as much as possible over the past several years. While I am not (yet) perfectly luminous, I have already realized considerable improvements in such subsidiary skills as managing my mood, hacking into some of the systems that cause akrasia and other non-endorsed behavior, and simply being less confused about why I do and feel the things I do and feel. I have some reason to believe that I am substantially more luminous than average, because I can ask people what seem to me to be perfectly easy questions about what they’re thinking and find them unable to answer. Meanwhile, I’m not trusting my mere impression that I’m generally right when I come to conclusions about myself. My models of myself, after I stop tweaking and toying with them and decide they’re probably about right, are borne out a majority of the time by my ongoing behavior. Typically, they’ll also match what other people conclude about me, at least on some level.
In this sequence, I hope to share some of the techniques for improving luminosity that I’ve used. I’m optimistic that at least some of them will be useful to at least some people. However, I may be a walking, talking “results not typical”. My prior attempts at improving luminosity in others consist of me asking individually-designed questions in real time, and that’s gone fairly well; it remains to be seen if I can distill the basic idea into a format that’s generally accessible.
I’ve divided up the sequence into eight posts, not including this one, which serves as introduction and index. (I’ll update the titles in the list below with links as each post goes up.)
You Are Likely To Be Eaten By A Grue. Why do you want to be luminous? What good does it do, and how does it do it?
Let There Be Light. How do you get your priors when you start to model yourself, when your existing models are probably full of biases?
The ABC’s of Luminosity. The most fundamental step in learning to be luminous is correlating your affect, behavior, and circumstance.
Lights, Camera, Action! Luminosity won’t happen by itself—you need to practice, and watch out for key mental items.
The Spotlight. Don’t keep your introspection interior. Thoughts are slippery. Label and organize whatever you find in your mind.
Highlights and Shadows. As you uncover and understand new things about yourself, it’s useful to endorse and repudiate your sub-components, and then encourage or interrupt them, respectively.
City of Lights. It’s a handy trick to represent yourself as multiple agents when dealing with tensions in yourself.
Lampshading. When you have models, test them—but rig your experiments!
Bonus posts!
Ureshiku Naritai: A story of how I used luminosity to raise my happiness set point.
On Enjoying Disagreeable Company: a luminosity-driven model of how to like people on purpose.
Seven Shiny Stories: concrete fictional descriptions of luminosity techniques from this sequence in action. (NOTE: Several people have remarked that SSS dramatically improved their understanding of the sequence. You may wish to read each Shiny Story concurrently with its associated post. The Shiny Stories each open with links to the relevant segment, and commenter apophenia has cleverly crossposted the stories under the top posts.)
I have already written all of the posts in this sequence, although I may make edits to later ones in response to feedback on earlier ones, and it’s not impossible that someone will ask me something that seems to indicate I should write an additional post. I will dole them out at a pace that responds to community feedback.
This preparation sounds great. Thank you for taking such care with the writing, and with providing this introduction. The idea of thorough, regulated introspection is new to me, and I’m looking forward to hearing from somebody who’s put a lot of thought into it.
A site where people (1) do deep original thinking, then (2) spend considerable time and effort to write accessibly about it, and (3) refine the ideas through civil discussion: all of these things are so rare that the combination of them on this site makes it the best philosophy/discussion forum I’ve ever been a part of.
Surely you mean The RGB’s of Luminosity. Ahem.
I like that you’re including forward links in your sequence. (I still think LW ought to automatically include adjacent-post-by-date-order links, too.)
I actually have things that start with A, B, and C, and I didn’t even have to contrive too hard.
Quick definition request: what’s an alief? Google shrugs at it.
An alief is an independent source of emotional reaction which can coexist with a contradictory belief. For example, the fear felt when a monster jumps out of the darkness in a scary movie is based on the alief that the monster is about to attack you, even though you believe that it cannot.
Searching for alief and belief together brought up this relevant PDF.
Thanks—just learning that concept has actually appreciably increased my (self) understanding.
In case it isn’t obvious to people: The name is a pun. If there are “b”-liefs there must be “a”-liefs. One way to think about an alief is as a kind of proto-belief.
Another one that I think has yet to escape Benton house is ‘cesire’, along the same lines.
All I’m finding on the Internets is Aimé Césaire—elaboration?
I would assume that cesire is a modified version of desire, possibly a tendency to act to further a certain cause even if you desire something else.
So would I; I would still like an elaboration.
It’s from p642 of the pdf you linked.
Thanks! It took me a while to sort of get a handle on the idea—I still didn’t get it when I posted the above comic.
Edit: The above comment. Geez, sleep-deprived much?
At the time that I encountered rationalist fiction, I thought it was interesting but not especially relevant.
Then I skimmed through the Sequences briefly and realized that I was already working out a concept extremely similar to this one, under a different name but with the same methods and goals. This convinced me that at least some people in this subculture probably knew what they were talking about.
Encountering a more developed concept of luminosity that looked like my previous concepts of “radical self-knowledge” also gives me a good place to link to when explaining the concept to the uninitiated and better keywords to search with when looking for books and articles. (It’s called heuristics and biases, not structural brain quirks...)
I have used similar techniques independently discovered to increase happiness*. I also frequently draw comment for being unusually self-aware.
Alicorn, thank you for writing this sequence. I like not feeling like the lone dissenter, however effective the methods actually are.
-* There was previously another statement here that it turns out was extremely premature. 6-10-12
You’re welcome :)
This sequence preview definitely looks promising...
...and, to a noob (that is, a me in the grip of Mind Projection Fallacy) screams “WEIRD SELF-HELP CULT” in huge neon letters. Anyone else notice this?
To a first approximation, all nontrivial advice on messing with the workings of your own head sounds weird, and self-help has a bad reputation because most of the people who consume it are losers, not winners looking to win harder. Also, honestly, there are weirder, cultier things on the site—anti-deathism, for one.
The rest of the sequence looks like it will be excellent. I think evidential introspection is a wonderful topic for this site.
FWIW, this is more commonly known as “cognitive behavioural therapy”, with a focus on “schema therapy”.
I just reread these and they’re great! I didn’t think much of them at the time, but I seem to have internalized them and actually fixed some problems in my life as a result.
Thanks!
Brilliant idea for a series! I spend a lot of time thinking about this; trying to understand my thoughts and consequently hack them.
It’s really interesting how much variation there is in people’s ability to comprehend the origin of thoughts. It’s also surprising how little control, or desire for control, some people have over their decisions. It certainly seems like something that can be learnt and changed over time. I’ve seen some significant improvements myself over the past 12 months without many exterior environmental changes.
The main hurdle I hit up against is confidence in my conclusions—introspection can’t be scientific by definition. I find it really difficult to measure improvement over time. Definitely interested to see how you deal with this!
What you observe via introspection is not accessible to third parties, yes.
But you use those observations to build models of yourself. Those models can be made explicit and communicated to others. And they make predictions about your future behavior, so they can be tested.
This is just begging for more tests! ;)
I think “Which parts are “me”?” is quite relevant to this sequence.
That’s most relevant to “City of Lights”, wherein I will link to that very post.
This looks like an interesting subject! Introspection is a bit of a difficult research assistant, but in some cases, it’s the best that we have.
A minor point: you write that
and also that the term ‘luminosity’ is already in use in a related, but different, sense. Would it then not be clearer to simply call it ‘self-awareness’? Or something else, say ‘lucidity’ (I’m sure there’s something better), if you want to diverge from what’s normally meant by ‘self-awareness’.
Anyway, looking forward to the rest of the sequence.
I think it doesn’t hurt to have a term that calls up not only the notion of self-awareness, but also the attitude that Alicorn is creating about it. It will also help indicate the coherence of the sequence.
I love the standard that LessWrong.com sets for philosophy, and will be extremely pleased if this sequence can meet that standard on such an important topic!
Meta-cognition is the standard term for “luminosity”. The Wikipedia entry might be an interesting read. I have done a lot of mind hacking, myself. :)
If you gain root, do release the source code for your patches. You might think you’re just making some improvements, but… after a while, too many new improvements can become more like a new human operating system. You can become so different that people will not be able to understand you anymore.
Re-arranging your consciousness is serious business. Don’t take it lightly. Aside from the social consequences, there are also system design pitfalls.