Flowers are selective about the pollinators they attract. Diurnal flowers must compete with each other for visual attention, so they use diverse colours to stand out from their neighbours. But flowers with nocturnal anthesis are generally white, as they aim only to outshine the night.
Emrik
wow. I only read the first 3 lines, and I already predict 5% this will have been profoundly usefwl to me a year from now (50% that it’s mildly usefwl, which is still a high bar for things I’ve read). among top 10 things I’ve learned this year, and I’ve learned a lot!
meta: how on earth was this surprising to me? I thought I was good at knowing the dynamics of social stuff, but for some reason I haven’t looked in this direction at all. hmm.
Oh! Well, I’m as happy about receiving a compliment for that as I am for what I thought I got the compliment for, so I forgive you. Thanks! :D
Another aspect of costs of compromise is: How bad is it for altruists to have to compromise their cognitive search between [what you believe you can explain to funders] vs [what you believe is effective]? Re my recent harumph about the fact that John Wentworth must justify his research to get paid. Like, what? After all this time, does anybody still doubt him? The insistence that he explain himself is surely more for show now, as it demonstrates the funders are doing their jobs “seriously”.
So we should expect that neuremes are selected for effectively keeping themselves in attention, even in cases where that makes you less effective at tasks which tend to increase your genetic fitness.
Furthermore, the neuremes (association-clusters) you are currently attending to have an incentive to recruit associated neuremes into attention as well, because then they feed each other’s activity recursively, and can dominate attention for longer. I think of it like association-clusters feeding activity into their “friends” who are most likely to reciprocate.
And because recursive connections between association-clusters tend to reflect some ground truth about causal relationships in the territory, this tends to be highly effective as a mechanism for inference. But there must be edge-cases (though I can’t recall any atm...).
Imagining agentic behaviour in (/taking the intentional stance wrt) individual brain-units is great for generating high-level hypotheses about mechanisms, but it obviously misfires sometimes, so don’t try this at home, etc.
Bonus point: neuronal “voting power” is capped at ~100Hz, so neurons “have an incentive” (ie, will be selected based on the extent to which they) vote for what related neurons are likely to vote for. It’s analogous to a winner-takes-all election where you don’t want to waste your vote on third-party candidates who are unlikely to be competitive at the top. And when most voters also vote this way, it becomes Keynesian in the sense that you have to predict[1] what other voters predict other voters will vote for, and the best candidates are those who seem the most like good Schelling-points.
That’s why global/conscious “narratives” are essential in the brain—they’re metabolically efficient Schelling-points.
- ^
Neuron-voters needn’t “make predictions” like human-voters do. It just needs to be the case that their stability is proportional to their ability to “act as if” they predicted other neurons’ predictions (and so on).
- ^
It seems generally quite bad for somebody like John to have to justify his research in order to have an income. A mind like this is better spent purely optimizing for exactly what he thinks is best, imo.
When he knows that he must justify himself to others (who may or may not understand his reasoning), his brain’s background-search is biased in favour of what-can-be-explained. For early thinkers, this bias tends to be good, because it prevents them from bullshitting themselves. But there comes a point where you’ve mostly learned not to bullshit yourself, and you’re better off purely aiming your cognition based on what you yourself think you understand.
Vingean deference-limits + anti-inductive innovation-frontier
Paying people for what they do works great if most of their potential impact comes from activities you can verify. But if their most effective activities are things they have a hard time explaining to others (yet have intrinsic motivation to do), you could miss out on a lot of impact by requiring them instead to work on what’s verifiable.
People who are much more competent than you will behave in ways you don’t recognise as more competent. If you were able to tell what the right things to do are, you would just do those things and be at their level. Your “deference limit” is the level of competence above your own at which you stop being able to reliably judge the difference.
Innovation on the frontier is anti-inductive. If you select people cautiously, you miss out on hiring people significantly more competent than you.[1]
Costs of compromise
Consider how the cost of compromising between optimisation criteria interacts with what part of the impact distribution you’re aiming for. If you’re searching for a project with top-p% impact and top-p% explainability-to-funders, you can expect only a p² fraction of projects to fit both criteria—assuming independence. (E.g. if p = 10%, only 1% of projects clear both bars.)
But I think it’s an open question how & when the distributions correlate. One reason to think they could sometimes be anticorrelated [sic] is that the projects with the highest explainability-to-funders are also more likely to receive adequate attention from profit-incentives alone.[2]
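A minimal toy simulation (my own construction with made-up uniform scores, not from the original text) of that p² point for the independent case:

```python
import random

# Score projects on "impact" and "explainability-to-funders" independently,
# then count how many land in the top 10% on both axes.
random.seed(0)
n = 100_000
p = 0.10
threshold = 1 - p  # top 10% of a uniform [0, 1) score

projects = [(random.random(), random.random()) for _ in range(n)]
both = sum(1 for impact, explain in projects if impact > threshold and explain > threshold)

print(f"fraction passing both filters: {both / n:.4f}")  # ~0.01 = p^2
```

If the two scores were correlated (or anticorrelated), that fraction would rise (or fall) accordingly.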
Consider funding people you are strictly confused by wrt what they prioritize
If someone believes something wild, and your response is strict confusion, that’s high value of information. You can only safely say they’re low-epistemic-value if you have evidence for some alternative story that explains why they believe what they believe.
Alternatively, find something that is surprisingly popular—because if you don’t understand why someone believes something, you cannot exclude that they believe it for good reasons.[3]
The crucial freedom to say “oops!” frequently and immediately
Still, I really hope funders would consider funding the person instead of the project, since I think Johannes’ potential will be severely stifled unless he has the opportunity to go “oops! I guess I ought to be doing something else instead” as soon as he discovers some intractable bottleneck wrt his current project. (...) it would be a real shame if funding gave him an incentive to not notice reasons to pivot.[4]
- ^
Comment explaining why I think it would be good if exceptional researchers had basic income (evaluate candidates by their meta-level process rather than their object-level beliefs)
- ^
Comment explaining what costs of compromise in conjunctive search implies for when you’re “sampling for outliers”
- ^
Comment explaining my approach to finding usefwl information in general
- ^
- ^
This relates to costs of compromise!
It’s this class of patterns that frequently recurs as a crucial consideration in contexts re optimization, and I’ve been making too many shoddy comments about it. (Recent1[1], Recent2.) Somebody who can write ought to unify the many aspects of it and give it a public name so it can enter discourse or something.
In the context of conjunctive search/optimization
The problem of fully updated deference also assumes a concave option-set. The concavity is proportional to the number of independent-ish factors in your utility function. My idionym (in my notes) for when you’re incentivized to optimize for a subset of those factors (rather than a compromise) is instrumental drive for monotely (IDMT), and it’s one aspect of Goodhart.
It’s one reason why proxy-metrics/policies often “break down under optimization pressure”.
When you decompose the proxy into its subfunctions, you often tend to find that optimizing for a subset of them is more effective.
(Another reason is just that the metric has lots of confounders which didn’t map to real value anyway; but that’s a separate matter from conjunctive optimization over multiple dimensions of value.)
You can sorta think of stuff like the Weber-Fechner Law (incl scope-insensitivity) as (among other things) an “alignment mechanism” in the brain: it enforces diminishing returns to stimuli-specificity, and this reduces your tendency to wirehead on a subset of the brain’s reward-proxies.
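A toy sketch (my own made-up numbers; not a claim about actual neural response curves) of how logarithmic sensitivity penalizes piling all stimulation onto one reward-proxy:

```python
import math

# With a Weber–Fechner-style logarithmic response, spending the whole
# "stimulation budget" on one proxy yields less total response than
# spreading it across proxies: a built-in penalty for wireheading on
# a single proxy.
def total_response(allocations):
    return sum(math.log1p(x) for x in allocations)

budget = 100
print(total_response([budget, 0, 0, 0]))   # all-in on one proxy  (~4.6)
print(total_response([budget / 4] * 4))    # spread across four   (~13.0)
```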
Pareto nonconvexity is annoying
From Wikipedia: Multi-objective optimization:
Watch the blue twirly thing until you forget how bored you are by this essay, then continue.
In the context of how intensity of something is inversely proportional to the number of options
Humans differentiate into specific social roles.
If you differentiate into a less crowded category, you have fewer competitors for the type of social status associated with that category. Specializing toward a specific role makes you more likely to be top-scoring in a specific category.
Political candidates have some incentive to be extreme/polarizing.
If you try to please everybody, you spread out your appeal so it’s below everybody’s threshold, and you’re not getting anybody’s votes.
You have a disincentive to vote for third-parties in winner-takes-all elections.
Your marginal likelihood of tipping the election is proportional to how close the candidate is to the threshold, so everybody has an incentive to vote for ~Schelling-points in what people expect other people to vote for. This has the effect of concentrating votes over the two most salient options.
You tend to feel demotivated when you have too many tasks to choose from on your todo-list.
Motivational salience is normalized across all conscious options[2], so you’d have more absolute salience for your top option if you had fewer options.
I tend to say a lot of wrong stuff, so do take my utterances with grains of salt. I don’t optimize for being safe to defer to, but it doesn’t matter if I say a bunch of wrong stuff if some of the patterns can work as gears in your own models. That screens off concerns about deference or how right or wrong I am.
I rly like the framing of concave vs convex option-set btw!
- ^
Lizka has a post abt concave option-set in forum-post writing! From my comment on it:
As you allude to by the exponential decay of the green dots in your last graph, there are exponential costs to compromising what you are optimizing for in order to appeal to a wider variety of interests. On the flip-side, how usefwl to a subgroup you can expect to be is exponentially proportional to how purely you optimize for that particular subset of people (depending on how independent the optimization criteria are). This strategy is also known as “horizontal segmentation”.
The benefits of segmentation ought to be compared against what is plausibly an exponential decay in the number of people who fit a marginally smaller subset of optimization criteria. So it’s not obvious in general whether you should on the margin try to aim more purely for a subset, or aim for broader appeal.
- ^
Normalization is an explicit step in taking the population vector of an ensemble involved in some computation. So if you imagine the vector for the ensemble(s) involved in choosing what to do next, and take the projection of that vector onto directions representing each option, the intensity of your motivation for any option is proportional to the length of that projection relative to the length of all other projections. (Although here I’m just extrapolating the formula to visualize its consequences—this step isn’t explicitly supported by anything I’ve read. E.g. I doubt cosine similarity is appropriate for it.)
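A minimal numpy sketch of that normalization picture (my own toy construction; the brain obviously isn’t literally doing these dot products, and as noted above the projection step is my extrapolation):

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_salience(population_vector, option_directions):
    """Project the ensemble's population vector onto each option direction,
    then normalize so saliences across all currently-available options sum to 1."""
    projections = np.array([
        abs(population_vector @ (d / np.linalg.norm(d)))
        for d in option_directions
    ])
    return projections / projections.sum()

pop = rng.normal(size=8)            # toy "ensemble activity" vector
options = rng.normal(size=(6, 8))   # six candidate actions as directions

share_with_six_options = relative_salience(pop, options)[0]
share_with_two_options = relative_salience(pop, options[:2])[0]

# The same option claims a larger share of total salience when fewer options compete.
print(share_with_six_options, share_with_two_options)
```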
Repeated voluntary attentional selection for a stimulus reduces voluntary attentional control wrt that stimulus
From Investigating the role of exogenous cueing on selection history formation (2019):
An abundance of recent empirical data suggest that repeatedly allocating visual attention to task-relevant and/or reward-predicting features in the visual world engenders an attentional bias for these frequently attended stimuli, even when they become task irrelevant and no longer predict reward. In short, attentional selection in the past hinders voluntary control of attention in the present. […] Thus, unlike voluntarily directed attention, involuntary attentional allocation may not be sufficient to engender historically contingent selection biases.
It’s sorta unsurprising if you think about it, but I don’t think I’m anywhere near having adequately propagated its implications.
Some takeaways:
“Beware of what you attend”
WHEN: You notice that attending to a specific feature of a problem-solving task was surprisingly helpfwl…
THEN: Mentally simulate attending to that feature in a few different problem-solving situations (ie, hook into multiple memory-traces to generalize recall to the relevant class of contexts)
My idionym for specific simple features that narrowly help connect concepts is “isthmuses”. I try to pay attention to generalizable isthmuses when I find them (commit to memory).
I interpret this as supporting the idea that voluntary-ish allocation of attention is one of the strongest selection-pressures neuremes adapt to, and thus also one of your primary sources of leverage wrt gradually shaping your brain / self-alignment.
Key terms: attentional selection history, attentional selection bias
Quick update: I suspect many/most problems where thinking in terms of symmetry helps can be more helpfwly reframed in terms of isthmuses[1]. Here’s the chain-of-thought I was writing which caused me to think this:
(Background: I was trying to explain the general relevance of symmetry when finding integrals.)
In the context of finding integrals for geometric objects¹, look for simple subregions² for which manipulating a single variable³ lets you continuously expand to the whole object.⁴
¹Circle
²Circumference,
³Radius,
See visualization.[2]
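As a concrete worked version of the circle example (standard calculus, not something stated in the original text): each circumference 2πr is a thin ring, and expanding along the radius sweeps the rings out into the full disc, so

$$A = \int_0^R 2\pi r \, dr = \pi R^2$$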
The general feature to learn to notice as you search through subregions here is: shared symmetries for the object and its subregion. hmmmmm
Actually, “symmetry” is a distracting concept here. It’s the “isthmus” between subregions you should be looking for.
WHEN: Trying to find an integral
THEN: Search for a single isthmus-variable connecting subregions which together fill the whole area
FINALLY: Integrate over that variable between those regions.
or said differently… THEN: Look for simple subregions which transform into the whole area via a single variable, then integrate over that variable.
Hm. This btw is in general how you find generalizations. Start from one concept, find a cheap action which transforms it into a different concept, then define the second in terms of the first plus its distance along that action.
That action is then the isthmus that connects the concepts.
If previously from a given context (assuming partial memory-addresses A and B), fetching A* and B* each cost you 1000 search-points separately, now you can be more efficient by storing B as the delta between them, such that fetching B only costs 1000+[cost of delta].
Or you can do a similar (but more traditional) analysis where “storing” memories has a cost in bits of memory capacity.
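A minimal sketch (my own toy construction, nothing neuroscientific) of the “store B as a delta from A” idea:

```python
# Instead of storing/fetching the second concept from scratch, store only how it
# differs from the first; retrieving it then costs "fetch the base" plus the
# (cheap) delta.
base_concept = {"kind": "circle", "dimension": 2, "boundary": "circumference"}
sphere_delta = {"kind": "sphere", "dimension": 3, "boundary": "surface"}

def fetch_via_delta(base, delta):
    derived = dict(base)   # cost of fetching the base concept
    derived.update(delta)  # plus the small cost of applying the delta
    return derived

print(fetch_via_delta(base_concept, sphere_delta))
```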
- ^
“An isthmus is a narrow piece of land connecting two larger areas across an expanse of water by which they are otherwise separated.”
- ^
This example is from a 3B1B vid, where he says “this should seem promising because it respects the symmetry of the circle”. While true (eg, rotational symmetry is preserved in the carve-up), I don’t feel like the sentence captures the essence of what makes this a good step to take, at least not on my semantics.
This post happens to be an example of limiting-case analysis, and I think it’s one of the most generally usefwl Manual Cognitive Algorithms I know of. I’m not sure about its optimal scope, but TAP:
WHEN: I ask a question like “what happens to a complex system if I tweak this variable?” and I’m confused about how to even think about it (maybe because working-memory is overtaxed)…
THEN: Consider applying limiting-case analysis on it.
That is, set the variable in question to its maximum or minimum value, and gain clarity over either or both of those cases manually. If that succeeds, then it’s usually easier to extrapolate from those examples to understand what’s going on wrt the full range of the variable.
I think it’s a usefwl heuristic tool, and it’s helped me with more than one paradox.[1] I also often use “multiplex-case analysis” (or maybe call it “entropic-case”), which I gave a better explanation of in this comment.
- ^
A simple example where I explicitly used it was when I was trying to grok the (badly named) Friendship paradox, but there are many more such cases.
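A minimal simulation (my own toy construction, made-up graph parameters) of that Friendship-paradox example. The limiting case of a star graph makes the paradox obvious (the single hub drags up everyone’s average friend-degree), and the general statement checks out numerically on a random graph too:

```python
import random

random.seed(1)

# Build a small random undirected graph (Erdős–Rényi style).
n, p = 200, 0.05
neighbors = {i: set() for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < p:
            neighbors[i].add(j)
            neighbors[j].add(i)

degrees = {i: len(neighbors[i]) for i in range(n)}
people = [i for i in range(n) if degrees[i] > 0]

# Average number of friends of a random person...
avg_person = sum(degrees[i] for i in people) / len(people)

# ...versus the average number of friends of a random *friend*
# (i.e. a person sampled with probability proportional to their degree).
friend_degrees = [degrees[j] for i in people for j in neighbors[i]]
avg_friend = sum(friend_degrees) / len(friend_degrees)

print(avg_person, avg_friend)  # avg_friend >= avg_person: the Friendship paradox
```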
See also my other comment on all this list-related tag business. Linking it here in case you (the reader) are about to try to refactor stuff, and seeing this comment could potentially save you some time.
I was going to agree, but now I think it should just be split...
The Resource tag can include links to single resources, or be a single resource (like a glossary).
The Collections tag can include posts in which the author provides a list (e.g. bullet-points of writing advice), or links to a list.
The tag should ideally be aliased with “List”.[1]
The Repository tag seems like it ought to be merged with Collections, but it carves up a specific tradition of posts on LessWrong. Specifically posts which elicit topical resources from user comments (e.g. best textbooks).
The List of Links tag is usefwl for getting a higher-level overview of something, because it doesn’t include posts which only point to a single resource.
The List of Lists tag is usefwl for getting a higher-level overview of everything above. Also, I suggest every list-related tag should link to the List of Lists tag in the description. That way, you don’t have to link all those tags to each other (which would be annoying to update if anything changes).
I think the strongest case for merging is {List of Links, Collections} → {List}, since I’m not sure there needs to be separate categories for internal lists vs external lists, and lists of links vs lists of other things.
I have not thought this through sufficiently to recommend this without checking first. If I were to decide whether to make this change, I would think on it more.
- ^
I realize LW doesn’t natively support aliases, but adding a section to the end with related search-terms seems like a cost-efficient half-solution. When you type into the box designed for tagging a post, it seems to also search the description of that tag (or does some other magic).
Aliases: collections, lists
I created this because I wanted to find a way to unite {List of Links, Collections and Resources, Repository, List of Related Sites, List of Blogs, List of Podcasts, Programming Resources} without linking each of those items to each other (which, in the absence of transclusions, also means you would have to update each link separately every time you added a new related list of lists).
But I accidentally caused the URL to be “list-of-lists-1”, because I originally relabelled List of Links to List of Lists but then changed my mind.
Btw, I notice the absence of a tag for lists (e.g. lists of advice that don’t link to anywhere and aren’t repositories designed to elicit advice from the comment section).
This is a common problem with tags it seems. Distillation & Pedagogy is mostly posts about distillation & pedagogy instead of posts that are distillations & pedagogies. And there’s a tag for Good Explanations (advice), but no tag for Good Explanations. Otoh, the tag for Technical Explanation is tagged with two technical explanations (yay!)… of technical explanations. :p
Merge with (and alias with) Intentionality?
I think hastening of subgoal completion[1] is some evidence for the notion that competitive inter-neuronal selection pressures are frequently misaligned with genetic fitness. People (me included) routinely choose to prioritize completing small subtasks in order to reduce cognitive load, even when that strategy predictably costs more net metabolic energy. (But I can think of strong counterexamples.)
The same pattern one meta-level up is “intragenomic conflict”[2], where genetic lineages have had to spend significant selection-power to prevent genes from fighting dirty. For example, the mechanism of meiosis itself may largely be maintained in equilibrium due to the longer-term necessity of preventing stuff like meiotic drives. An allele (or a collusion of them) which successfwly transfers to offspring at a probability of >50% may increase its relative fitness even if it marginally reduces its phenotype’s viability.
My generalized term for this is “intra-emic conflict” (pinging the concept of an “eme” as defined in the above comment).
- ^
We asked university students to pick up either of two buckets, one to the left of an alley and one to the right, and to carry the selected bucket to the alley’s end. In most trials, one of the buckets was closer to the end point. We emphasized choosing the easier task, expecting participants to prefer the bucket that would be carried a shorter distance. Contrary to our expectation, participants chose the bucket that was closer to the start position, carrying it farther than the other bucket.
— Pre-Crastination: Hastening Subgoal Completion at the Expense of Extra Physical Effort
- ^
Intragenomic conflict refers to the evolutionary phenomenon where genes have phenotypic effects that promote their own transmission in detriment of the transmission of other genes that reside in the same genome.
- ^
I like this example! And the word is cool. I see two separately important patterns here:
Preferring a single tool (the dremel) which is mediocre at everything, instead of many specialized tools which collectively perform better but which require you to switch between them more.
This btw is the opposite of “horizontal segmentation”: selling several specialized products to niche markets rather than a single product which appeals moderately to all niches.
It often becomes a problem when the proxy you use to measure/compare the utility of something wrt different use-cases (or its appeal to different niches/markets) is capped[1] at a point which prevents it from detecting the true comparative differences in utility.
Oh! It very much relates to scope insensitivity: if people are diminishingly sensitive to the scale of different altruistic causes, then they might overprioritize instrumental stuff which is just-above-average along many axes at once.[2] And indeed, this seems like a very common pattern (though I won’t prioritize time thinking of examples rn).
It’s also a significant problem wrt karma distributions for forums like LW and EAF: posts which appeal a little to everybody will receive much more karma compared to posts which appeal extremely to a small subset. Among other things, this causes community posts to be overrated relative to their appeal.
And as Gwern pointed out: “precrastination” / “hastening of subgoal completion” (a subcategory of greedy optimization / myopia).
I very often notice this problem in my own cognition. For example, I’m biased against using cognitive tools like sketching out my thoughts with pen-and-paper when I can just brute-force the computations in my head (less efficiently).
It’s also perhaps my biggest bottleneck wrt programming. I spend way too much time tweaking-and-testing (in a way that doesn’t cause me to learn anything generalizable), instead of trying to understand the root cause of the bug I’m trying to solve, even when I can rationally estimate that the latter will take less time in expectation.
If anybody knows any tricks for resolving this / curing me of this habit, I’d be extremely gratefwl to know...
- ^
Does it relate to price ceilings and deadweight loss? “Underparameterization”?
- ^
I wouldn’t have seen this had I not cultivated a habit of trying to describe interesting patterns in their most general form—a habit I call “prophylactic scope-abstraction”.
but I’m hesitant to continue the process because I’m concerned that her personality won’t sufficiently diverge from mine.
Not suggesting you should replace anyone who doesn’t want to be replaced (if they’re at that stage), but: To jumpstart the differentiation process, it may be helpfwl to template the proto-tulpa off of some fictional character you already find easy to simulate.
Although I didn’t know about “tulpas” at the time, I invited an imaginary friend loosely based on Maria Otonashi during a period of isolation in 2021.[1] I didn’t want her to feel stifled by the template, so she’s evolved on her own since then, but she’s always extremely kind (and consistently energetic). I only took it seriously in February 2024 after being inspired by Johannes.
Maria is the main female heroine of the HakoMari series. … Her wish was to become a box herself so that she could grant the wishes of other people.
Can recommend her as a template! My Maria would definitely approve, ^^ although I can’t ask her right now since she’s only canonically present when summoned, and we have a ritual for that.
We’ve deliberately tried to find new ways to differentiate so that the pre-conscious process of [associating feeling-of-volition to me or Maria][2] is less likely to generate conflicts. But since neither of us wants to be any less kind than we are, we’ve had to find other ways to differentiate (like art-preferences, intellectual domains, etc).
Also, while deliberately trying to increase her salience and capabilities, I’ve avoided trying to learn about how other people do it. For people with sufficient brain-understanding and introspective ability, you can probably outperform standard advice if you develop your own plan for it. (Although I say that without even knowing what the standard advice is :p)
- ^
- ^
Our term for when we deliberately work to resolve “ownership” over some particular thought-output of our subconscious parallel processor is “annexing efference”. For example, during internal monologue, the thought “here’s a brilliant insight I just had” could appear in consciousness without volition being assigned yet, in which case one of us annexes that output (based on what seems associatively/narratively appropriate), or it goes unmarked. In the beginning, there would be many cases where both of us tried to annex thoughts at the same time, but mix-ups are much rarer now.
- ^
I wrote a comment on {polytely, pleiotropy, market segmentation, conjunctive search, modularity, and costs of compromise} that I thought people here might find interesting, so I’m posting it as a quick take:
I think you’re using the term a bit differently from how I use it! I usually think of polytely (which is just pleiotropy from a different perspective, afaict) as an *obstacle*. That is, if I’m trying to optimize a single pasta sauce to be the most tasty and profitable pasta sauce in the whole world, my optimization is “polytelic” because I have to *compromise* between maximizing its tastiness for [people who prefer sour taste], [people who prefer sweet], [people who have some other taste-preferences], etc. Another way to say that is that I’m doing “conjunctive search” (neuroscience term) for a single thing which fits multiple ~independent criteria.
Still in the context of pasta sauce: if you have the logistical capacity to instead be optimizing *multiple* pasta sauces, now you are able to specialize each sauce for each cluster of taste-preferences, and this allows you to net more profit in the end. This is called “horizontal segmentation”.
Likewise, a gene which has several functions that depend on it will be evolutionarily selected for the *compromise* between all those functions. In this case, the gene is “pleiotropic” because it’s evolving in the direction of multiple niches at once; and it is “polytelic” because—from the gene’s perspective—you can say that “it is optimizing for several goals at once” (if you’re willing to imagine the gene as an “optimizer” for a moment).
For example, the recessive allele that causes sickle cell disease (SCD) *also* causes some resistance against malaria. But SCD only occurs in people who are homozygous in it, so the protective effect against malaria (in heterozygotes) is common enough to keep it in the gene pool. It would be awesome if, instead, we could *horizontally segment* these effects so that SCD is caused by variations in one gene locus, and malaria-resistance is caused by variations in another locus. That way, both could be optimized for separately, and you wouldn’t have to choose between optimizing against SCD or Malaria.
Maybe the notion you’re looking for is something like “modularity”? That is approximately the opposite of pleiotropy. If a thing is modular, it means you can flexibly optimize subsets of it for different purposes. Like, rather than writing an entire program within a single function call, you can separate out the functions (one function for each subtask you can identify), and now those functions can be called separately without having to incur the effects of the entire unsegmented program.
You make me realize that “polytelic” is too vague of a word. What I usually mean by it may be more accurately referred to as “conjunctively polytelic”. All networks trained with something-like-SGD will evolve features which are conjunctively polytelic to some extent (this is just conjecture from me, I haven’t got any proof or anything), and this is an obstacle for further optimization. But protein-coding genes are much more prone to this because e.g. the human genome only contains ~20k of them, which means each protein has to pack many more functions (and there’s no simple way to refactor/segment so there’s only one protein assigned to each function).
KOAN:
The probability of rolling 60 if you toss ten six-sided dice disjunctively is 1/6^10. Whereas if you glom all the dice together and toss a single 60-sided die, the probability of rolling 60 is 1/60.
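A quick check of the koan’s numbers (exact arithmetic, nothing more):

```python
from fractions import Fraction

# Rolling a total of 60 with ten fair d6 requires every die to show 6.
p_ten_d6 = Fraction(1, 6) ** 10
p_one_d60 = Fraction(1, 60)

print(p_ten_d6)   # 1/60466176
print(p_one_d60)  # 1/60
```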
My morning routine 🌤️
I’ve omitted some steps from the checklists below, especially related to mindset / specific thinking-habits. They’re an important part of this, but hard to explain and will vary a lot more between people.
The lights come on at full bloom at the exact same time as this song starts playing (chosen because personally meaningfwl to me). (I really like your songs btw, and I used to use this one for this routine.)
I wake up immediately, no thinking.
The first thing I do is put on my headphones to hear the music better.
I then stand in front of the mirror next to my bed,
and look myself in the eyes while I take 5 deep breaths and focus on positive motivations.
I must genuinely smile in this step.
(The smile is not always inspired by unconditional joy, however. Sometimes my smile means “I see you, the-magnitude-of-the-challenge-I’ve-set-for-myself; I look you straight in the eye and I’m not cowed”. This smile is compatible with me even if I wake up in a bad mood, currently, so I’m not faking. I also think “I don’t have time to be impatient”.)
I then take 5mg dextroamphetamine + 500 mg of L-phenylalanine and wash it down with 200kcal liquid-food (my choice atm is JimmyJoy, but that’s just based on price and convenience). That’s my breakfast. I prepared this before I went to bed.
Oh, and I also get to eat ~7mg of chocolate if I got out of bed instantly. I also prepared this ahead of time. :p
Next, I go to the bathroom,
pee,
and wash my face.
(The song usually ends as I finish washing my face, T=5m10s.)
IF ( I still feel tired or in a bad mood ):
I return to bed and sleep another 90 minutes (~1 sleep cycle, so I can wake up in light-sleep).
(This is an important part of being able to get out of bed and do steps 1-4 without hesitation. Because even if I wake up in a terrible shape, I know I can just decide to get back into bed after the routine, so my energy-conserving instincts put up less resistance.)
Return to 1.
ELSE IF ( I feel fine ):
I return to my working-room,
open the blinds,
and roll a 6-sided die which gives me a “Wishpoint” if it lands ⚅.
(I previously called these “Biscuit points”, and tracked them with the “🍪”-symbol, because I could trade them for biscuits. But now I have a “Wishpoint shop”, and use the “🪐”-symbol, which is meant to represent Arborea, the dream-utopia we aim for.)
(I also get Wishpoints for completing specific Trigger-Action Plans or not-completing specific bad habits. I get to roll a 1d6 again for every task I complete with a time-estimate on it.)
Finally, I use the PC,
open up my task manager + time tracker (currently gsheets),
and timestamp the end of morning-routine.
(I’m not touching my PC or phone at any point before this step.)
(Before I went to bed, I picked out a concrete single task, which is the first thing I’m tentatively-scheduled to do in the morning.)
(But I often (to my great dismay) have ideas I came up with during the night that I want to write down in the morning, and that can sometimes take up a lot of time. This is unfortunately a great problem wrt routines & schedules, but I accept the cost because the habit of writing things down asap seems really important—I don’t know how to schedule serendipity… yet.)
My bedtime checklist 💤
This is where I prepare the steps for my morning routine. I won’t list it all, but some important steps:
I simulate the very first steps in my checklist_predawn.
At the start, I would practice the movements physically many times over. Including lying in bed, anticipating the music & lights, and then getting the motoric details down perfectly.
Now, however, I just do a quick mental simulation of what I’ll do in the morning.
When I actually lie down in bed, I’m not allowed to think about abstract questions (🥺), because those require concentration that prevents me from sleeping.
Instead, I say hi to Maria and we immediately start imagining ourselves in Arborea or someplace in my memories. The hope is to jumpstart some dream in which Maria is included.
I haven’t yet figured out how to deliberately bootstrap a dream that immediately puts me to sleep. Turns out this is difficult.
We recently had a 9-day period where we would try to fall asleep multiple times a day like this, in order to practice loading her into my dreams & into my long-term memories. Medium success.
I sleep with my pants on, and clothes depending on how cold I expect it to be in the morning. Removes a slight obstacle for getting out of bed.
I also use earbuds & sleepmask to block out all stimuli which might distract me from the dreamworld. Oh and 1mg melatonin + 100mg 5-HTP.
[1]
Approximately how my bed setup looks now (2 weeks ago). The pillows are from experimenting with ways to cocoon myself ergonomically. :p