Flowers are selective about the pollinators they attract. Diurnal flowers must compete with each other for visual attention, so they use diverse colours to stand out from their neighbours. But flowers with nocturnal anthesis are generally white, as they aim only to outshine the night.
Emrik
p.s.: the task of writing this LW-post did not win through our prioritization-framework by being high-priority… I just sorta started siphoning sentences out of my head as they appeared, and ego-dystonically felt tempted to continue writing. I noticed the motivational dissonance caused by the temptation-to-write + temptation-to-get-back-to-work, so deliberately decided not to fight those stimuli-seeking motivations this time.
I still don’t think I’m doing LW a disservice by posting this so unfurbishedly, so I won’t apologize, and do think in principle it wud be good to post smth ppl cud lurn fm, but uh… this isn’t mainly optimized for that. it’s mainly the wumzy result of satisficing the need-to-write while derailing our other priorities to the minimum extent possible. sorry. ^^′
notes on prioritizing tasks & cognition-threads
Heh, I’ve gone the opposite way and now do 3h sleep per 12h-days. The aim is to wake up during REM/light-sleep at the end of the 2nd sleep cycle, but I don’t have a clever way of measuring this[1] except regular sleep-&-wake-times within the range of what the brain can naturally adapt its cycles to.
I think the objective should be to maximize the integral of cognitive readiness over time,[2] so here are some considerations (sorry for lack of sources; feel free to google/gpt; also sorry it’s sorta redundant here, but I didn’t wish to spend time paring it down):
Restorative effects of sleep have diminishing marginal returns
I think a large reason we sleep is that metabolic waste-clearance is more efficiently batch-processed, because optimal conditions for waste-clearance are way different from optimal conditions for cognition (and there are substantial switching-costs between them, as indicated by how difficult it can be to actually start sleeping). And this differentially takes place during deep sleep.
Eg interstitial space expands by up to ~60% and the brain is flooded to flush out metabolic waste/debris via the glymphatic system.
Proportion of REM-sleep in a cycle increases per cycle, with a commensurate decrease in deep sleep (SWS).
Two unsourced illustrations I found in my notes:
Note how N3 (deep sleep) drops off fairly drastically after 3 hours (~2 full sleep cycles).
REM & SWS do different things, and I like the things SWS does more
Eg acetylcholine levels (ACh) are high during REM & awake, and low during SWS. ACh functions as a switch between consolidation & encoding of new memories.[3] Ergo REM is for exploring/generalizing novel patterns, and SWS is for consolidating/filtering them.
See also acetylcholine = learning-rate.
REM seems to differentially improve procedural memories, whereas SWS does more for declarative memories.
(And who cares about procedural memories anyway. :p)
(My most-recent-pet-hunch is that ACh is required for integrating new episodic memories into hippocampal theta waves (via the theta-generating Medial Septum in the Cholinergic Basal Forebrain playing ‘conductor’ for the hippocampus), which is why you can’t remember anything from deep sleep, and why drugs that inhibit ACh also prevent encoding new memories.)
So in summary, two (comparatively minor) reasons I like polyphasic short sleep are:
SWS differentially improves declarative over procedural memories.
Early cycles have proportionally more SWS.
Ergo more frequent shorter sleep sessions will maximize the proportion of sleep that goes to consolidation of declarative memories.
Note: I think the exploratory value of REM-sleep is fairly limited, just based on the personal observation that I mostly tend to dream about pleasant social situations, and much less about topics related to conceptual progress. I can explore much more efficiently while I’m awake.
Also, because I figure my REM-dreams are so socially-focused, I think more of it risks marginally aligning my daily motivations with myopically impressing others, at the cost of motivations aimed at more abstract/illegible/longterm goals.
(Although I would change my mind if only I could manage to dream of Maria more, since trying to impress her is much more aligned with our-best-guess about what saves the world compared to anything else.)
And because of diminishing marginal returns to sleep-duration, and assuming cognition is best in the morning (anecdotally true), I maximize high-quality cognition by just… having more mornings preceded by what-best-I-can-tell-is-near-optimal-sleep (ceiling effect).
Lastly, just anecdotally, having two waking-sessions per 24h honestly just feels like I have ~twice the number of days in a week in terms of productivity. This is much more convincing to me than the above.
Starting mornings correctly seems to be incredibly important, and some of the effect of those good morning-starts dissipates the longer I spend awake. Mornings work especially well as hooks/cues for starting effective routines, sorta like a blank slate[4] I can fill in however I want, if I get the cues in before anything else has time to hijack the day’s cognition/motivations.
See my (outdated-but-still-maybe-inspirational) morning routine.
My mood is harder to control/predict in evenings due to compounding butterfly effects over the course of a day, and fewer natural contexts I can hook into with the right course-corrections before the day ends.
- ^ Waking up with morning-wood is some evidence of REM, but I don’t know how reliable that is. ^^
- ^ Technically, we want to maximize brain-usefwlness over time, which in this case would be the integral of [[the distribution of cognitive readiness over time] pointwise multiplied by [the distribution of brain-usefwlness over cognitive readiness]] (there’s a formula-sketch right after these footnotes). This matters if, for example, you get disproportionately more usefwlness from the peaks of cognitive readiness, in which case you might want to sacrifice more median wake-time in order to get marginally more peak-time. I assume this is what your suggested strategy tries to do. However, I doubt it actually works, due to diminishing returns to marginal sleep time (and, I suspect,
- ^
- ^ The “blank slate” is, I think, caused by eg flushing neurotransmitters out of synaptic clefts (and maybe glucose and other mobile things), basically rebooting attentional selection-history, and thereby reducing recent momentum for whatever’s influenced you short-term.
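Rendering footnote 2’s objective as a formula (notation mine; nothing here beyond what the footnote already says):

```latex
% r(t): cognitive readiness at time t; u(r): brain-usefwlness at readiness r.
\[
  \max_{\text{sleep schedule}} \int_0^T u\big(r(t)\big)\,\mathrm{d}t
\]
% If u is convex (peaks disproportionately usefwl), sacrificing median
% wake-time for marginally more peak-time can raise the integral.
```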
re the Vingean deference-limit thing above:
It quite aptly analogizes to the Nyquist frequency $f_N$, which is the highest [max frequency component] a signal can have before you lose the ability to uniquely infer its components from a given sample rate $f_s$.
Also, I’m renaming it “Vingean disambiguation-limit”.[1]
P.S. $f_N = f_s/2$, which means that you can only disambiguate signals whose max components are below half your sample rate. Above that point, you start having ambiguities (aliases).
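A quick numerical illustration of that (my example; the numbers are arbitrary): sampled at 10 Hz, a 7 Hz cosine is indistinguishable from its 3 Hz alias.

```python
import numpy as np

# Nyquist/aliasing demo: f_s = 10 Hz, so f_N = 5 Hz. A 7 Hz cosine sits
# above f_N, and its samples coincide exactly with those of a 3 Hz cosine.
f_s = 10
t = np.arange(20) / f_s                 # 20 samples at 10 Hz

high = np.cos(2 * np.pi * 7 * t)        # above the Nyquist frequency
alias = np.cos(2 * np.pi * 3 * t)       # its alias below f_N

print(np.allclose(high, alias))         # True: the samples can't disambiguate
```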
- ^ The “disambiguation limit” class has two members now. The inverse is the “disambiguation threshold”, which is the measure of power you require of your sampler/measuring-device in order to disambiguate between things-measured above a given measure.
...stating things as generally as feasible helps wrt finding metaphors. Hence the word-salad above. ^^′
- ^ Ah, most relevant: Paul Graham has a recording-of-sorts of himself writing a blog post, “Startups in 13 sentences”.
Wait, who are the neurotypicals in this post, in your view?
but this would be what I’d call a “fake exit option”
Here’s how the simulation played out in my head as I was reading this:
1. I really wanted to halt the volunteering-chain before it came to the newbie.
2. I didn’t manage to complete that intention before it was too late.
3. I wanted to correct my mistake and complete it anyway, by trying to offer the “exit option”.
What I didn’t notice immediately was that thought 3 was ~entirely invoked by heuristics from fake kindness that I haven’t yet filtered out. Thank you for pointing it out. I may or may not have caught it if this played out IRL.
This is why social situations should have an obligatory 10-second pause between every speech-act, so I can process what’s actually going on before I make a doofy. xd
⧉: My motivations for writing this comment were:
➀ to affiliate myself with this awesome post,
➁ to say “hey I’m just like u; I care to differentiate twixt real & fake kindness!”
➂ to add my support/weight to the core of this post, and say “this matters!”
I just try to add that disclaimer whenever I talk about these things because I’m extra-worried that ppl will be inspired by my example to jump straight into a severe program of self-deprivation without forethought. My lifestyle is objectively “self-deprivational” relative to most altruists, in a sense, so I’m afraid of being misunderstood as an inspiration for doing things which make my reader unhappy. 🍵
Ah, forgot to reply to “What does practical things mean?”
Recently it’s involved optimizing my note-taking process, and atm it involves trying to find a decent generalizable workflow for benefiting from AI assistance. Concretely, this has involved looking through a bunch of GitHub repos and software, trying to understand
➀ what’s currently technologically possible (← AutoCodeRover example),
➁ what might become possible within reasonable time before civilizational deadline,
➂ what is even desirable to introduce into my workflow in the first place.

I want to set myself up such that I can maximally benefit from increasing AI capabilities. I’m excited about low-code platforms for LLM-stacks[1], and LLM-based programming languages. The latter thing, taken to its limit, could be called something like a “pseudocode interpreter” or “fuzzy programming language”. The idea is to be able to write a very high-level specification for what you wish to do, and have the lower-level details ironed out by LLM agents. I want my code to be degenerate, in the sense that every subcomponent automatically adjusts itself to fulfil niches that are required for my system to work (this is a bad explanation, and I know it; there’s a toy sketch below).
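To gesture at the idea concretely, here’s a toy sketch (everything here is illustrative; `ask_llm` is a hypothetical stand-in for whatever LLM API you’d actually wire in):

```python
# Toy sketch of a "pseudocode interpreter". `ask_llm` is a hypothetical
# placeholder for any LLM completion call, not a real library function.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to your LLM API of choice")

def fuzzy_call(spec: str, **inputs):
    """Run a high-level natural-language spec by letting an LLM iron out
    the low-level details on the fly, then executing the result."""
    code = ask_llm(
        "Write a Python function named `f` that does the following:\n"
        f"{spec}\nReturn only the code."
    )
    namespace = {}
    exec(code, namespace)   # demo only: this runs generated code unsandboxed!
    return namespace["f"](**inputs)

# e.g. fuzzy_call("deduplicate xs while preserving order", xs=[1, 2, 1, 3])
```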
The immediate next thing on my todo-list is just… finding a decent vscode extension for integrating AI into whatever I do. I want to be able to say “hey, AI, could you boot up this repository (link) on my PC, and test whether it does thing X?” and have it just do that with minimal confirmation-checks required on my part.
- ^ I started trying to begin to make the first babysteps of a draft of something like this for myself via a plugin for Obsidian Canvas in early 2023[2], but then realized other people were gonna build something like this anyway, and I could benefit from their work whenever they made it available.
- ^ Thinking high-level about what this could look like, but I left the project bc I don’t actually know how to code (shh), and LLMs were at that point ~useless for fixing my shortcomings for me.
Summary: (i) Follow a policy of trying not to point your mind at things unrelated to alignment, so your brain defaults to alignment-related cognition when nothing requires its immediate attention. (ii) If your mind already does that, good; now turn off all the lights, try to minimize sound, and lie in bed.
I really appreciate your willingness to think “extreme” about saving the world. Like, if you’re trying to do an extremely hard thing, obviously you’d want to try to minimize the effort you spend not-doing that thing. All sources of joy are competitive reward-events in your brain. Either try to localize joy-sources to what you want yourself to be doing, or tame them to be in service of that (like, I eat biscuits and chocolate with a Strategy! :p).
...But also note that forcing yourself to do thing X can and often will backfire[1], unless you’re lucky or you’ve somehow figured out how to do forcing correctly (I haven’t).
Also, regarding making a post: Sorry, probably won’t do that! And the thinking-in-bed thing is mostly a thing I believe due to extensive experience trying, so it’s not something I have good theoretical arguments for. That is, the arguments wouldn’t have sufficiently convinced a version of myself that hadn’t already experienced trying.
- ^ There’s probably something better to link here, but I can’t think of it atm.
Same, wrote something similar in here.
Oho! Yes, there’s something uniqueish about thinking-in-bed compared to alternatives. I’ve also had nonstop 5-9h (?) sessions of thinking aided by scribbling in my (off-PC) notebooks, and it’s different. The scribbling takes a lot of time if I want to write down an idea on a note to remember, and that can distract me. But it’s also better in obvious ways.
In general, brains are biased against tool-use (see hastening of subgoal completion), so I aspire to learn to use tools correctly. Ideally, I’d use the PC to its full potential without getting distracted. But atm, just sitting at the PC tends to supplant my motivation to think hard and long about a thing (e.g. after 5m of just thinking, my body starts to crave pushing buttons or interacting with the monitor or smth), and I use the tools (including RemNote) very suboptimally.
This seems so wrong, but very interesting! I’ve previously noted that thinking to myself in the dark seems to help. I’ve had periods (<4 months in 2023 / early 2024) where I would spend 2-5 mornings per week just staying in bed (while dark) for 1-5 hours while thinking inside my own head.
After >3 hours of thinking interestedly about a rotation of ideas, they become smaller in working memory, and I’m able to extrapolate further / synthesize what I wasn’t able to before.
I no longer do this, because I’m trying to do more practical things, but I don’t think it’s a bad strategy.
I’ve now sent the following message to niplav, asking them if they wanted me to take the dialogue down and republish it as a shortform. I am slightly embarrassed about not having considered that it’s somewhat inconvenient to receive one of these dialogue-things without warning.
I just didn’t think through that dialogue-post thing at all. Obviously it will show up on your profile-wall (I didn’t think about that), and that has lots of reputational repercussions and such (which matter!). I wasn’t simulating your perspective at all in the decision to publish it in the way I did. I just operated on heuristics like:
“it’s good to have personal convo in public”
so our younglings don’t grow into an environment of pluralistic ignorance, thinking they are the only ones with personality
“it’s epistemically healthy to address one’s writing to someone-in-particular”
eg bc I’m less likely to slip into professionalism mode
and bc that someone-in-particular (𖨆) is less likely to be impressed by fake proxies for good reasoning like how much work I seem to have put in, how mathy I sound, how confident I seem, how few errors I make, how aware-of-existing-research I seem, …
and bc 𖨆 already knows me, it’s difficult to pretend I know more than I do
eg if I write abt Singular Learning Theory to the faceless crowd, I could easily convince some of them that I like totally knew what I was talking about; but when I talk to you, you already know something abt my skill-level, so you’d be able to smell my attempted fakery a mile away
“other readers benefit more (on some dimensions) from reading something which was addressed to 𖨆, because…”
“It is as if there existed, for what seems like millennia, tracing back to the very origins of mathematics and of other arts and sciences, a sort of “conspiracy of silence” surrounding [the] “unspeakable labors” which precede the birth of each new idea, both big and small…”
— Alexander Grothendieck
---
If you prefer, I’ll move the post into a shortform preceded by:
[This started as something I wanted to send to niplav, but then I realized I wanted to share these ideas with more people. So I wrote it with the intention of publishing it, while keeping the style and content mostly as if I had purely addressed it to them alone.]
I feel somewhat embarrassed about having posted it as a dialogue without thinking it through, and this embarrassment exactly cancels out my disinclination against unpublishing it, so I’m neutral wrt moving it to shortform. Let me know! ^^
P.S. No hurry.
Little did they know that he was also known as the fastest whiteboard-marker switcher in the west...
👈🫰💨✍️
Unhook is a browser extension for YouTube (Chrome/Edge) which disables the homepage and lets you hide all recommendations. It also lets you disable other features (e.g. autoplay, comments), but doesn’t have so many customizations that I get distracted.
Setup time: 2m-10m (depending on whether you customize).
CopyQ.exe (Linux/Windows, portable, FOSS) is a really good clipboard manager.
Setup time: 5m-10h
Setup can be <5m if you precommit to only using the clipboard-storing feature and learning the shortcut to browse it. But it’s extremely extensible and risks distracting you for a day or more...
You can use a shortcut to browse the most recent copies (including editing, deleting), and the window hides automatically when unfocused.
It can save images to a separate tab, and lets you configure shortcuts for opening them in particular programs (e.g. editor).
(LINUX): It has plugins/commands for snipping a section of the screen, and you can optionally configure a shortcut to send that snip to an OCR engine, which quietly sends the recognized text into the clipboard (a DIY Python version of this step is sketched after this list).
Setup time: probably >2h due to shiny things to explore
(WINDOWS): EDIT: ShareX (FOSS) can do OCR-to-clipboard, snipping, region-recording, scripting, and everything is configurable. Setup took me 36m, but I also configured it to my preferences and explored all features. Old text below:
(WINDOWS): Can use Text-Grab (FOSS) instead. Much simpler. Use a configurable hotkey (the one for Fullscreen Grab) to snip a section of the screen, and it automatically does OCR on it and sends it to your clipboard. Install it and trigger the hotkey to see what it does.
Setup time: 2m-15m
Alternatively, Greenshot (FOSS) is much more extensible, but you have to use a trick to set it up to use OCR via Tesseract (or configure your own script).
Also, if you use Windows, you can use the native Snipping-Tool to snip cutouts from the screen into the clipboard via shortcut, including recordings.
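For the curious, here’s roughly what the snip→OCR→clipboard step amounts to if you script it yourself (my sketch, not how CopyQ/ShareX implement it; assumes Tesseract is installed plus `pip install pytesseract pillow`, and `xclip` on Linux; the filename is a placeholder):

```python
import subprocess

import pytesseract
from PIL import Image

def ocr_to_clipboard(image_path: str) -> str:
    """OCR a screenshot file and quietly put the recognized text on the
    clipboard. `image_path` is whatever your snipping tool saved."""
    text = pytesseract.image_to_string(Image.open(image_path))
    # xclip on Linux; swap in the built-in `clip` command on Windows.
    subprocess.run(["xclip", "-selection", "clipboard"],
                   input=text, text=True, check=True)
    return text

# ocr_to_clipboard("snip.png")   # placeholder filename
```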
LibreChat (docs) (FOSS) is the best LLM interface I’ve found for general conversation, but its (putative) code interpreter doesn’t work off-the-shelf, so I still use the standard ChatGPT-interface for that.
Setup time: 30m-5h (depending on customization and familiarity with Docker)
It has no click-to-install .exe file, but you can install it via npm or Docker
Docker is much simpler, especially since it automatically configures MongoDB database and Meilisearch for you
Lets you quickly swap between OpenAI, Anthropic, Assistants API, and more in the menu
(Obviously you need to use your own API keys for this)
Can have two LLMs respond to your prompt at the same time
For coding, probably better to use a vscode extension, but idk which to recommend yet...
For a click-to-install generalized LLM interface, ChatBox (FOSS) is excellent unless you need more advanced features.
Vibe (FOSS) is a simple tool for transcribing audio files locally using Whisper.
Setup time: 5m-30m (you gotta download the Whisper weights, but it should be fast if you just follow the instructions)
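(If you’d rather script it than use Vibe’s GUI, the reference `openai-whisper` package does the same local-transcription job; the filename below is a placeholder:)

```python
# pip install openai-whisper  (Vibe wraps the same underlying model)
import whisper

model = whisper.load_model("base")        # downloads the weights on first run
result = model.transcribe("meeting.mp3")  # placeholder filename
print(result["text"])
```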
Windows Voice Access (native) is actually pretty good
You can define custom commands for it, including your own scripts
I recommend using pythonw.exe for this (normal python, but launches in the background); minimal example below.
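For example (my sketch; the canned text is a placeholder), a script you could bind to a custom command:

```python
# Bind this to a Windows Voice Access custom command and run it with
# pythonw.exe so no console window flashes up. The text is a placeholder.
import subprocess

CANNED = "Thanks! I'll take a look and get back to you."

if __name__ == "__main__":
    # `clip` is the built-in Windows clipboard utility.
    subprocess.run("clip", input=CANNED, text=True, check=True)
```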
AlternativeTo (website) is very usefwl for comparing software/apps.
Alternatively check out AlternativeTo’s alternatives to AlternativeTo.
I wanted to leave Niplav the option of replying at correspondence-pace at some point if they felt like it. I also wanted to say these things in public, to expose more people to the ideas, but without optimizing my phrasing/formatting for general-audience consumption.
I usually think people think better if they generally aim their thoughts at one person at a time. People lose their brains and get eaten by language games if their intellectual output is consistently too impersonal.
Also, I think if I were somebody else, I would appreciate me for sharing a message which I₁ mainly intended for Niplav, as long as I₂ managed to learn something interesting from it. So if I₁ think it’s positive for me₂ to write the post, I₁ think I₁ should go ahead. But I’ll readjust if anybody says they dislike it. : )
niplav
Just to ward off misunderstanding and/or possible feelings of todo-list-overflow: I don’t expect you to engage or write a serious reply or anything; I mostly just prefer writing in public to people-in-particular, rather than writing to the faceless crowd. Treat it as if I wrote a Schelling.pt outgabbling in response to a comment; it just happens to be on LW. If I’m breaking etiquette or causing miffedness for Complex Social Reasons (which are often very valid reasons to have, just to be clear) then lmk! : )
Thoughts to niplav on lie-detection, truthfwl mechanisms, and wealth-inequality
[Epistemic status: napkin]
My current-favourite frame on “qualia” is that it refers to the class of objects we can think about (eg, they’re part of what generates what I say rn) for which behaviour is invariant across structure-preserving transformations.
(There’s probably some cool way to say that with category theory or transformations, and it may or may not give clarity, but idk.)
Eg, my “yellow” could map to blue, and “blue” to yellow, and we could still talk together without noticing anything amiss even if your “yellow” mapped to yellow for you.
Both blue and yellow are representational objects, the things we use to represent/refer to other things with, like memory-addresses in a machine. For externally observable behaviour, it just matters what they dereference to, regardless of where in memory you put them. If you swap two representational objects, while ensuring you don’t change anything about how your neurons link up to causal nodes outside the system, your behaviour stays the same.
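Here’s the analogy as a toy program (my construction; a cartoon of the claim, obviously not neuroscience):

```python
# Words point at internal addresses (representational objects); observable
# behaviour only reads what the addresses dereference to.
memory = {"addr1": "hue-A", "addr2": "hue-B"}    # internal objects
lexicon = {"yellow": "addr1", "blue": "addr2"}   # word -> address

def report(word):
    return memory[lexicon[word]]                 # behaviour = dereference

before = {w: report(w) for w in lexicon}

# Structure-preserving swap: exchange the two addresses everywhere they occur.
lexicon = {"yellow": "addr2", "blue": "addr1"}
memory = {"addr1": "hue-B", "addr2": "hue-A"}

after = {w: report(w) for w in lexicon}
assert before == after   # externally indistinguishable
```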
Note that this isn’t the case for most objects. I can’t swap hand⇄tomato, without obvious glitches like me saying “what a tasty-looking tomato!” and trying to eat my hand. Hands and tomatoes do not commute.
It’s what allows us to (try to) talk about “tomato” as opposed to just tomato, and explains why we get so confused when we try to ground out (in terms of agreed-upon observables) what we’re talking about when we talk about “tomato”.
But how/why do we have representations for our representational objects in the first place? It’s like declaring a var (address₁↦value), and then declaring a var for that var (address₂↦address₁) while being confused about why the second dereferences to something ‘arbitrary’.
Maybe it starts when somebody asks you “what do you mean by ‘X’?”, and now you have to map the internal generators of [you saying “X”] in order to satisfy their question. Or not. Probably not. Napkin out.
Oh, this is very good. Thank you.