Flowers are selective about the pollinators they attract. Diurnal flowers must compete with each other for visual attention, so they use diverse colours to stand out from their neighbours. But flowers with nocturnal anthesis are generally white, as they aim only to outshine the night.
I’d be willing to eat animals if I thought that could help me help others more effectively.[1] So I appreciate the post where you try to provide some relevant evidence, and I really appreciate your commitment to do what’s expedient for helping you save me, the people I love, and countless others from disaster—because that’s clearly where you’re coming from.
Otoh, my health seems unusually peak, despite the (somewhat unusual) vegan[2] diet I eat, so it seems unlikely I’m suffering from a crippling deficiency atm. This could be because either I or my body has somehow managed to compensate (psychologically or homeostatically) for whatever’s lacking in our diet, but it seems more likely that peak health requires an adequate diet in the first place, so my guess is that there’s no inadequacy left for eating animals to fix?
My crux is just that I don’t have the self-experimentation setup to be able to detect the delta benefit/cost of eating animals, and that the range of plausible deltas there seems insufficient for me to invest in the experiment.
Sorry for confusementedly writing. I’m mainly just trying to reflect here, and wanted to write a comment to stabilize my commitment to go to extremes (like eating animals) in order to pursue altruism. I’d be happy if somebody convinced me it was worth the experiment, but this post didn’t bop me over the threshold. Thanks!
[1]
(I’d also be willing to murder random people and cook them for the same reason, if I thought that could help me help others more effectively. That seems less likely, however, for nutritional, practical, and psychological reasons. I just mention it because I think some morality-declarations are helpfwl.)
[2]
If people want to add to their anecdata, my details are:
vegan for 12 years.
major history with major depression (started before I went vegan).
deep depressions stopped (afaict, so far) when I started taking ADHD-meds (first LDX, and now DEX) 2.5 years ago.
this is confounded by other simultaneous major life changes, like starting coworking on EA Gather Town (and meeting a person I respected & liked, who told me I wasn’t insane for thinking different, and taught me to trust in myself ^^), so take the anecdata with grains of salt.
subfunctional overlaps in attentional selection history imply momentum for decision-trajectories
Oh, this is very good. Thank you.
p.s.: the task of writing this LW-post did not win through our prioritization-framework by being high-priority… I just sorta started siphoning sentences out of my head as they appeared, and ego-dystonically felt tempted to continue writing. I noticed the motivational dissonance caused by the temptation-to-write + temptation-to-get-back-to-work, so deliberately decided not to fight those stimuli-seeking motivations this time.
I still don’t think I’m doing LW a disservice by posting this so unfurbishedly, so I won’t apologize, and I do think in principle it would be good to post something people could learn from, but uh… this isn’t mainly optimized for that. It’s mainly the wumzy result of satisficing the need-to-write while derailing our other priorities to the minimum extent possible. sorry. ^^′
notes on prioritizing tasks & cognition-threads
Heh, I’ve gone the opposite way and now do 3h of sleep per 12h-day. The aim is to wake up during REM/light-sleep at the end of the 2nd sleep cycle, but I don’t have a clever way of measuring this[1] except keeping regular sleep-&-wake-times within the range of what the brain can naturally adapt its cycles to.
I think the objective should be to maximize the integral of cognitive readiness over time,[2] so here are some considerations (sorry for the lack of sources; feel free to google/gpt; also sorry for being sorta redundant here, but I didn’t wish to spend time paring it down):
Restorative effects of sleep have diminishing marginal returns
I think a large reason we sleep is that metabolic waste-clearance is more efficiently batch-processed, because optimal conditions for waste-clearance are way different from optimal conditions for cognition (with substantial switching-costs between them, as indicated by how difficult it can be to actually start sleeping). And this differentially takes place during deep sleep.
Eg interstitial space expands by ~60% and the brain is flooded to flush out metabolic waste/debris via the glymphatic system.
Proportion of REM-sleep in a cycle increases per cycle, with a commensurate decrease in deep sleep (SWS).
Two unsourced illustrations I found in my notes:
Note how N3 (deep sleep) drops off fairly drastically after 3 hours (~2 full sleep cycles).
REM & SWS do different things, and I like the things SWS does more
Eg acetylcholine (ACh) levels are high during REM & wakefulness, and low during SWS. ACh functions as a switch between consolidation & encoding of new memories.[3] Ergo REM is for exploring/generalizing novel patterns, and SWS is for consolidating/filtering them.
See also acetylcholine = learning-rate.
REM seems to differentially improve procedural memories, whereas SWS does more for declarative memories.
(And who cares about procedural memories anyway. :p)
(My most-recent-pet-hunch is that ACh is required for integrating new episodic memories into hippocampal theta waves (via the theta-generating Medial Septum in the Cholinergic Basal Forebrain playing ‘conductor’ for the hippocampus), which is why you can’t remember anything from deep sleep, and why drugs that inhibit ACh also prevent encoding new memories.)
So in summary, two (comparatively minor) reasons I like polyphasic short sleep are:
SWS differentially improves declarative over procedural memories.
Early cycles have proportionally more SWS.
Ergo more frequent shorter sleep sessions will maximize the proportion of sleep that goes to consolidation of declarative memories.
Note: I think the exploratory value of REM-sleep is fairly limited, just based on the personal observation that I mostly tend to dream about pleasant social situations, and much less about topics related to conceptual progress. I can explore much more efficiently while I’m awake.
Also, because I figure my REM-dreams are so socially-focused, I think more of it risks marginally aligning my daily motivations with myopically impressing others, at the cost of motivations aimed at more abstract/illegible/longterm goals.
(Although I would change my mind if only I could manage to dream of Maria more, since trying to impress her is much more aligned with our-best-guess about what saves the world compared to anything else.)
And because of diminishing marginal returns to sleep-duration, and assuming cognition is best in the morning (anecdotally true), I maximize high-quality cognition by just… having more mornings preceded by what-best-I-can-tell-is-near-optimal-sleep (ceiling effect).
Lastly, just anecdotally, having two waking-sessions per 24h honestly just feels like I have ~twice the number of days in a week in terms of productivity. This is much more convincing to me than the above.
Starting mornings correctly seems to be incredibly important, and some of the effect of those good morning-starts dissipates the longer I spend awake. Mornings work especially well as hooks/cues for starting effective routines, sorta like a blank slate[4] I can fill in however I want if I can get the cues in before anything else has time to hijack the day’s cognition/motivations.
See my (outdated-but-still-maybe-inspirational) morning routine.
My mood is harder to control/predict in evenings due to compounding butterfly effects over the course of a day, and fewer natural contexts I can hook into with the right course-corrections before the day ends.
[1]
Waking up with morning-wood is some evidence of REM, but I don’t know how reliable that is. ^^
[2]
Technically, we want to maximize brain-usefwlness over time, which in this case would be the integral of [[the distribution of cognitive readiness over time] pointwise multiplied by [the distribution of brain-usefwlness over cognitive readiness]]. This matters if, for example, you get disproportionately more usefwlness from the peaks of cognitive readiness, in which case you might want to sacrifice more median wake-time in order to get marginally more peak-time.
I assume this is what your suggested strategy tries to do. However, I doubt it actually works, due to diminishing returns to marginal sleep time (and, I suspect,
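To state that as a formula (my own formalization; the symbols are my notation, not from the original):

$$\max \int_0^T u\big(r(t)\big)\,dt$$

where $r(t)$ is cognitive readiness at time $t$ and $u(\cdot)$ is brain-usefwlness as a function of readiness. If $u$ is convex, the peaks dominate the integral, which is exactly the case where sacrificing median wake-time for marginally more peak-time pays off.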
[4]
“Blank slate” I think caused by eg flushing neurotransmitters out of synaptic clefts (and maybe glucose and other mobile things), basically rebooting attentional selection-history, and thereby reducing recent momentum for whatever’s influenced you short-term.
re the Vingean deference-limit thing above:
It quite aptly analogizes to the Nyquist frequency $f_N$, which is the highest [max frequency component] a signal can have before you lose the ability to uniquely infer its components from a given sample rate $f_s$.
Also, I’m renaming it “Vingean disambiguation-limit”.[1]
P.S. $f_N = f_s/2$, which means that you can only disambiguate signals whose max components are below half your sample rate. Above that point, you start having ambiguities (aliases).
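A minimal demo of the ambiguity (my own illustration in Python, not from the original comment): two sine waves whose frequencies differ by exactly the sample rate produce identical samples, so nothing downstream can disambiguate them.

```python
import numpy as np

fs = 10.0                    # sample rate (Hz); Nyquist frequency is fs/2 = 5 Hz
t = np.arange(20) / fs       # 20 uniformly spaced sample times

f1 = 3.0                     # below the Nyquist frequency: recoverable
f2 = f1 + fs                 # differs by exactly fs, so it aliases onto f1

s1 = np.sin(2 * np.pi * f1 * t)
s2 = np.sin(2 * np.pi * f2 * t)

print(np.allclose(s1, s2))  # True -> the samples can't tell f1 from f2
```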
[1]
The “disambiguation limit” class has two members now. The inverse is the “disambiguation threshold”, which is the measure of power you require of your sampler/measuring-device in order to disambiguate between things-measured above a given measure.
...stating things as generally as feasible helps wrt finding metaphors. Hence the word-salad above. ^^′
- ^
Ah, most relevant: Paul Graham has a recording-of-sorts of himself writing a blog post “Startups in 13 sentences”.
Wait, who are the neurotypicals in this post, in your view?
but this would be what I’d call a “fake exit option”
Here’s how the simulation played out in my head as I was reading this:
1. I really wanted to halt the volunteering-chain before it came to the newbie.
2. I didn’t manage to complete that intention before it was too late.
3. I wanted to correct my mistake and complete it anyway, by trying to offer the “exit option”.
What I didn’t notice immediately was that thought 3 was ~entirely invoked by heuristics from fake kindness that I haven’t yet filtered out. Thank you for pointing it out. I may or may not have caught it if this played out IRL.
This is why social situations should have an obligatory 10-second pause between every speech-act, so I can process what’s actually going on before I make a doofy. xd
⧉: My motivations for writing this comment were:
➀ to affiliate myself with this awesome post,
➁ to say “hey I’m just like u; I care to differentiate twixt real & fake kindness!”
➂ to add my support/weight to the core of this post, and say “this matters!”
I just try to add that disclaimer whenever I talk about these things because I’m extra-worried that ppl will be inspired by my example to jump straight into a severe program of self-deprivation without forethought. My lifestyle is objectively “self-deprivational” relative to most altruists, in a sense, so I’m afraid of being misunderstood as an inspiration for doing things which make my reader unhappy. 🍵
Ah, forgot to reply to “What does practical things mean?”
Recently it’s involved optimizing my note-taking process, and atm it involves trying to find a decent generalizable workflow for benefiting from AI assistance. Concretely, this has involved looking through a bunch of GitHub repos and software, trying to understand
➀ what’s currently technologically possible (← AutoCodeRover example),
➁ what might become possible within reasonable time before civilizational deadline,
➂ what is even desirable to introduce into my workflow in the first place.

I want to set myself up such that I can maximally benefit from increasing AI capabilities. I’m excited about low-code platforms for LLM-stacks[1], and LLM-based programming languages. The latter thing, taken to its limit, could be called something like a “pseudocode interpreter” or “fuzzy programming language”. The idea is to be able to write a very high-level specification for what you wish to do, and have the lower-level details ironed out by LLM agents. I want my code to be degenerate, in the sense that every subcomponent automatically adjusts itself to fulfil niches that are required for my system to work (this is a bad explanation, and I know it).
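To gesture at what I mean, here’s a toy sketch (entirely my own; `call_llm` is a hypothetical stand-in, not a real API):

```python
# Toy "pseudocode interpreter": a fuzzy, high-level spec goes in, and an LLM
# irons out the low-level details. `call_llm` is a placeholder for whatever
# completion API you use; none of these names are real tools.

def call_llm(prompt: str) -> str:
    """Stub: send `prompt` to your LLM of choice and return its reply."""
    raise NotImplementedError

def interpret(spec: str) -> str:
    """Expand a high-level spec into concrete, runnable code."""
    prompt = (
        "Write a self-contained Python script that does the following, "
        "making reasonable choices for every unspecified detail:\n" + spec
    )
    return call_llm(prompt)

code = interpret("boot up this repository on my PC and test whether it does X")
```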
The immediate next thing on my todo-list is just… finding a decent vscode extension for integrating AI into whatever I do. I want to be able to say “hey, AI, could you boot up this repository (link) on my PC, and test whether it does thing X?” and have it just do that with minimal confirmation-checks required on my part.
[1]
I started trying to begin to make the first babysteps of a draft of something like this for myself via a plugin for Obsidian Canvas in early 2023[2], but then realized other people were gonna build something like this anyway, and I could benefit from their work whenever they made it available.
[2]
Thinking high level about what this could look like, but left the project bc I don’t actually know how to code (shh), and LLMs were at that point ~useless for fixing my shortcomings for me.
Summary: (i) Follow a policy of trying not to point your mind at things unrelated to alignment, so your brain defaults to alignment-related cognition when nothing requires its immediate attention. (ii) If your mind already does that, good; now turn off all the lights, try to minimize sound, and lie in bed.
I really appreciate your willingness to think “extreme” about saving the world. Like, if you’re trying to do an extremely hard thing, obviously you’d want to try to minimize the effort you spend not-doing that thing. All sources of joy are competitive reward-events in your brain. Either try to localize joy-sources to what you want yourself to be doing, or tame them to be in service of that (like, I eat biscuits and chocolate with a Strategy! :p).
...But also note that forcing yourself to do thing X can and often will backfire[1], unless you’re lucky or you’ve somehow figured out how to do forcing correctly (I haven’t).
Also, regarding making a post: sorry, I probably won’t! And the thinking-in-bed thing is mostly a thing I believe due to extensive experience trying, so it’s not something I have good theoretical arguments for. That is, the arguments wouldn’t have sufficiently convinced a version of myself that hadn’t already experienced trying.
[1]
There’s probably something better to link here, but I can’t think of it atm.
Same, wrote something similar in here.
Oho! Yes, there’s something uniqueish about thinking-in-bed compared to alternatives. I’ve also had nonstop 5-9h (?) sessions of thinking aided by scribbling in my (off-PC) notebooks, and it’s different. The scribbling takes a lot of time if I want to write down an idea on a note to remember, and that can distract me. But it’s also better in obvious ways.
In general, brains are biased against tool-use (see hastening of subgoal completion), so I aspire to learn to use tools correctly. Ideally, I’d use the PC to its full potential without getting distracted. But atm, just sitting at the PC tends to supplant my motivation to think hard and long about a thing (e.g. after 5m of just thinking, my body starts to crave pushing buttons or interacting with the monitor or smth), and I use the tools (including RemNote) very suboptimally.
This seems so wrong, but very interesting! I’ve previously noted that thinking to myself in the dark seems to help. I’ve had periods (<4 months in 2023 / early 2024) where I would spend 2-5 mornings per week just staying in bed (while dark) for 1-5 hours while thinking inside my own head.
After >3 hours of thinking interestedly about a rotation of ideas, they become smaller in working memory, and I’m able to extrapolate further / synthesize what I wasn’t able to before.
I no longer do this, because I’m trying to do more practical things, but I don’t think it’s a bad strategy.
I’ve now sent the following message to niplav, asking them if they wanted me to take the dialogue down and republish it as a shortform. I am slightly embarrassed about not having considered that it’s somewhat inconvenient to receive one of these dialogue-things without warning.
I just didn’t think through that dialogue-post thing at all. Obviously it will show up on your profile-wall (I didn’t think about that), and that has lots of reputational repercussions and such (which matter!). I wasn’t simulating your perspective at all in the decision to publish it in the way I did. I just operated on heuristics like:
“it’s good to have personal convo in public”
so our younglings don’t grow into an environment of pluralistic ignorance, thinking they are the only ones with personality
“it’s epistemically healthy to address one’s writing to someone-in-particular”
eg bc I’m less likely to slip into professionalism mode
and bc that someone-in-particular (𖨆) is less likely to be impressed by fake proxies for good reasoning like how much work I seem to have put in, how mathy I sound, how confident I seem, how few errors I make, how aware-of-existing-research I seem, …
and bc 𖨆 already knows me, it’s difficult to pretend I know more than I do
eg if I write abt Singular Learning Theory to the faceless crowd, I could easily convince some of them that I like totally knew what I was talking about; but when I talk to you, you already know something abt my skill-level, so you’d be able to smell my attempted fakery a mile away
“other readers benefit more (on some dimensions) from reading something which was addressed to 𖨆, because
“It is as if there existed, for what seems like millennia, tracing back to the very origins of mathematics and of other arts and sciences, a sort of “conspiracy of silence” surrounding [the] “unspeakable labors” which precede the birth of each new idea, both big and small…”
— Alexander Grothendieck
---
If you prefer, I’ll move the post into a shortform preceded by:
[This started as something I wanted to send to niplav, but then I realized I wanted to share these ideas with more people. So I wrote it with the intention of publishing it, while keeping the style and content mostly as if I had purely addressed it to them alone.]
I feel somewhat embarrassed about having posted it as a dialogue without thinking it through, and this embarrassment exactly cancels out my disinclination against unpublishing it, so I’m neutral wrt moving it to shortform. Let me know! ^^
P.S. No hurry.
Little did they know that he was also known as the fastest whiteboard-marker switcher in the west...
👈🫰💨✍️
Unhook is a browser extension for YouTube (Chrome/Edge) which disables the homepage and lets you hide all recommendations. It also lets you disable other features (e.g. autoplay, comments), but doesn’t have so many customizations that I get distracted.
Setup time: 2m-10m (depending on whether you customize).
CopyQ (Linux/Windows, portable, FOSS) is a really good clipboard manager.
Setup time: 5m-10h
Setup can be <5m if you precommit to only using the clipboard-storing feature and learning the shortcut to browse it. But it’s extremely extensible and risks distracting you for a day or more...
You can use a shortcut to browse the most recent copies (including editing, deleting), and the window hides automatically when unfocused.
It can save images to a separate tab, and lets you configure shortcuts for opening them in particular programs (e.g. editor).
(LINUX): It has plugins/commands for snipping a section of the screen, and you can optionally configure a shortcut to send that snip to an OCR engine, which quietly sends the recognized text into the clipboard.
Setup time: probably >2h due to shiny things to explore
(WINDOWS): EDIT: ShareX (FOSS) can do OCR-to-clipboard, snipping, region-recording, scripting, and everything is configurable. Setup took me 36m, but I also configured it to my preferences and explored all features. Old text below:
(WINDOWS): Can use Text-Grab (FOSS) instead. Much simpler. Use a configurable hotkey (the one for Fullscreen Grab) to snip a section of the screen, and it automatically does OCR on it and sends it to your clipboard. Install it and trigger the hotkey to see what it does.
Setup time: 2m-15m
Alternatively, Greenshot (FOSS) is much more extensible, but you have to use a trick to set it up to use OCR via Tesseract (or configure your own script).
Also, if you use Windows, you can use the native Snipping Tool to snip cutouts from the screen into the clipboard via shortcut, including recordings.
LibreChat (docs) (FOSS) is the best LLM interface I’ve found for general conversation, but its (putative) code interpreter doesn’t work off-the-shelf, so I still use the standard ChatGPT-interface for that.
Setup time: 30m-5h (depending on customization and familiarity with Docker)
It has no click-to-install .exe file, but you can install it via npm or Docker
Docker is much simpler, especially since it automatically configures the MongoDB database and Meilisearch for you
Lets you quickly swap between OpenAI, Anthropic, Assistants API, and more in the menu
(Obviously you need to use your own API keys for this)
Can have two LLMs respond to your prompt at the same time
For coding, probably better to use a vscode extension, but idk which to recommend yet...
For a click-to-install generalized LLM interface, ChatBox (FOSS) is excellent unless you need more advanced features.
Vibe (FOSS) is a simple tool for transcribing audio files locally using Whisper.
Setup time: 5m-30m (you gotta download the Whisper weights, but should be fast if you just follow the instructions)
Windows Voice Access (native) is actually pretty good
You can define custom commands for it, including your own scripts
I recommend using pythonw.exe for this (normal python, but launches in the background)
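For example (my own toy script, not from the post): save something like this as say_time.pyw, point a custom Voice Access command at pythonw.exe with the file as its argument, and it runs without flashing a console window.

```python
# Toy custom-command payload: pops up the current time.
# Run via pythonw.exe so no console window appears.
import datetime
import tkinter as tk
from tkinter import messagebox

root = tk.Tk()
root.withdraw()  # suppress the empty main window; we only want the popup
messagebox.showinfo("Time", datetime.datetime.now().strftime("%H:%M"))
root.destroy()
```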
AlternativeTo (website) is very usefwl for comparing software/apps.
Alternatively check out AlternativeTo’s alternatives to AlternativeTo.
I wanted to leave Niplav the option of replying at correspondence-pace at some point if they felt like it. I also wanted to say these things in public, to expose more people to the ideas, but without optimizing my phrasing/formatting for general-audience consumption.
I usually think people think better if they generally aim their thoughts at one person at a time. People lose their brains and get eaten by language games if their intellectual output is consistently too impersonal.
Also, I think if I were somebody else, I would appreciate me for sharing a message which I₁ mainly intended for Niplav, as long as I₂ managed to learn something interesting from it. So if I₁ think it’s positive for me₂ to write the post, I₁ think I₁ should go ahead. But I’ll readjust if anybody says they dislike it. : )
In nature, you can imagine species undergoing selection on several levels / time-horizons. If long-term fitness-considerations for genes differ from short-term considerations, long-term selection (let’s call this “longscopic”) may imply net fitness-advantage for genes which remove options wrt climbing the shortscopic gradient.
Meiosis as a “veil of cooperation”
Holly suggests this explains the origin of meiosis itself. Recombination randomizes which alleles you end up with in the next generation, so it’s harder for you to collude with a subset of them. And this forces you (as an allele hypothetically planning ahead) to optimize/cooperate for the benefit of all the other alleles in your DNA.[1] I call it a “veil of cooperation”[2], because it works by preventing you from “knowing” which situation you end up in (ie, it destroys options wrt which correlations you can “act on” / adapt to).
Compare that to, say, the postsegregational killing mechanisms rampant[3] in prokaryotes. Genes on a single plasmid ensure that when the host organism copies itself, any host-copy that doesn’t also include a copy of the plasmid is killed by internal toxins. This has the effect of increasing the plasmid’s relative proportion in the host species, so without mechanisms preventing internal misalignment like that, the adaptation remains stable.
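A toy simulation of the mechanism (entirely my own sketch, with made-up parameters): compare plasmid frequency over generations with and without the killing of daughter cells that lose the plasmid.

```python
import random

random.seed(0)

def simulate(kill_segregants: bool, generations: int = 30) -> float:
    """Return the final plasmid frequency in a population of hosts."""
    pop = [True] * 100                     # True = host carries the plasmid
    for _ in range(generations):
        daughters = []
        for has_plasmid in pop:
            lost = has_plasmid and random.random() < 0.05  # segregational loss
            if lost and kill_segregants:
                continue                   # lingering toxin kills the cured cell
            daughters.append(has_plasmid and not lost)
        pop = random.sample(daughters, min(100, len(daughters)))  # resource cap
    return sum(pop) / len(pop)

print(simulate(kill_segregants=False))  # frequency decays (~0.2 after 30 gens)
print(simulate(kill_segregants=True))   # stays 1.0: losing the plasmid is lethal
```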
There’s constant fighting in local vs global & shortscopic vs longscopic gradients all across everything, and cohesive organisms enforce global/long selection-scopes by restricting the options subcomponents have to propagate themselves.
Generalization in the brain as an alignment mechanism against shortscopic dimensions of its reward functions (ie prevents overfitting)
Another example: REM-sleep & episodic daydreaming provide constant generalization-pressure for neuremic adaptations (learned behaviors) to remain beneficial across all the imagined situations (and chaotic noise) your brain puts them through. Again an example of a shortscopic gradient constantly aligning to a longscopic gradient.
Some abstractions for thinking about internal competition between subdimensions of a global gradient
For example, you can imagine each set of considerations as a loss gradient over genetic-possibility-space, and the gradients diverging from each other on specific dimensions. Points where they intersect from different directions are “pleiotropic/polytelic pinch-points”, and represent the best compromise geneset for both gradients—sorta like an equilibrium price in a supply-&-demand framework.
To take the economics-perspective further: if a system (an economy, a gene pool, a brain, whatever) is at equilibrium price wrt the many dimensions of its adaptation-landscape[4] (whether the dimensions be primary rewards or acquired proxies), then globally-misaligned local collusions can be viewed as inframarginal trade[5]. Thus I find a #succinct-statement from my notes:
(Thanks for prompting me to rediscover it!)
So, take a brain-example again: My brain has both shortscopic and longscopic reward-proxies & behavioral heuristics. When I postpone bedtime in order to, say, get some extra work done because I feel behind; then the neuremes representing my desire to get work done now are bidding for decision-weight at some price[6], and decision-weight-producers will fulfill the trades & provide up to equilibrium. But unfortunately, those neuremes have cheated the market by isolating the bidding-war to shortscopic bidders (ie enforced a particularly narrow perspective), because if they hadn’t, then the neuremes representing longscopic concerns would fairly outbid them.[7]
(Note: The economicsy thing is a very incomplete metaphor, and I’m probably messing things up, but this is theory, so communicating promising-seeming mistakes is often as helpfwl as being correct-but-slightly-less-bold.)
[1] ie, it marginally flattens the intra-genomic competition-gradient, thereby making cooperative fitness-dimensions relatively steeper.
[2] from “veil of ignorance”.
[3] or at least that’s the word they used… I haven’t observed this rampancy directly.
[4] aka “loss-function”
[5] Inframarginal trade: trade in which producers & consumers match at an off-equilibrium price, and which requires the worse-off party to not have the option of getting their thing cheaper at the global equilibrium-price. (Eg: if the equilibrium price is 10, a buyer who’s been cut off from the wider market might pay a seller 13; the trade still happens, but only because the buyer’s cheaper option was destroyed.) Thus it reflects a local-global disparity in which trades things are willing to make (ie which interactions are incentivized).
[6] The “price” in this case may be that any assembly of neurons which “bids” for relevancy to current activity takes on some risk of depotentiation if it then fails to synchronize. That is, if its firing rate slips off the harmonics of the dominant oscillations going on at present, and starts firing into the STDP-window for depotentiation.
[7] If they weren’t excluded from the market, bedtime-maintenance-neuremes would outbid working-late-neuremes, with bids reflecting the brain’s expectation that maintaining bedtime has higher utility long-term compared to what can be greedily grabbed right now. (Because BEDTIME IS IMPORTANT!) :p