I would spend 2-5 mornings per week just staying in bed (while it was still dark) for 1-5 hours, thinking inside my own head.
Same here, this works for me too; I wrote something similar here. (Maybe this needs its own post instead of two comments by me that few saw, given it works for both of us.)
If I imagine a counterfactual version of me who regularly does this for long stretches (e.g. for most of their waking life), they are much more likely to succeed. I think I might need to set up a system to keep me in such a state (like locking any computers in a timed box, so I can do nothing but think alone).
Oho! Yes, there’s something unique-ish about thinking-in-bed compared to the alternatives. I’ve also had nonstop 5-9h (?) sessions of thinking aided by scribbling in my (off-PC) notebooks, and it’s different. The scribbling takes a lot of time whenever I want to write an idea down so I remember it, and that can distract me. But it’s also better in obvious ways.
In general, brains are biased against tool-use (see hastening of subgoal completion), so I aspire to learn to use tools correctly. Ideally, I’d use the PC to its full potential without getting distracted. But atm, just sitting at the PC tends to supplant my motivation to think hard and long about a thing (e.g. after 5 minutes of just thinking, my body starts to crave pushing buttons or interacting with the monitor or something), and I use the tools (including RemNote) very suboptimally.
Same[1]. Would you want to write a short post about this? I think you could do it better than I could, judging by these two paragraphs you just wrote.
(I edited into my reply: “(Maybe this needs its own post instead of two comments by me that few saw given it works for both of us)”)
Here’s the summary I was writing for a not-yet-existent post:
Summary: (i) Follow a policy of trying not to point your mind at things unrelated to alignment, so your brain defaults to alignment-related cognition when nothing requires its immediate attention. (ii) If your mind already does that, good; now turn off all the lights, try to minimize sound, and lie in bed. Stay there for at least an hour. I predict you’ll have new relevant insights in this state. (iii) This works very well for me, but I don’t know if it will for others; you only need to test it once, though.
I hope my favorite alignment theorists see this and do it intentionally.
Apart from wanting to learn to use computers without becoming distracted: I think that would be really hard for my mind, and I should just usually avoid them.
Summary: (i) Follow a policy of trying not to point your mind at things unrelated to alignment, so your brain defaults to alignment-related cognition when nothing requires its immediate attention. (ii) If your mind already does that, good; now turn off all the lights, try to minimize sound, and lie in bed.
I really appreciate your willingness to think “extreme” about saving the world. Like, if you’re trying to do an extremely hard thing, obviously you’d want to try to minimize the effort you spend not-doing that thing. All sources of joy are competitive reward-events in your brain. Either try to localize joy-sources to what you want yourself to be doing, or tame them to be in service of that (like, I eat biscuits and chocolate with a Strategy! :p).
...But also note that forcing yourself to do thing X can and often will backfire[1], unless you’re lucky or you’ve somehow figured out how to do forcing correctly (I haven’t).
Also, regarding making a post: sorry, I probably won’t! The thinking-in-bed thing is mostly something I believe due to extensive experience with trying it, so it’s not something I have good theoretical arguments for. That is, the arguments wouldn’t have sufficiently convinced a version of myself that hadn’t already tried it.
But also note that forcing yourself to do thing X can and often will backfire[1]
I must have mis-worded the first sentence. I guess it’s hard for me to write advice without it being read as ‘naively maximize this in a way that leads to burnout instead of actually maximizing it’, because I specifically changed the original phrasing from ‘minimize’ to ‘trying not to’ to try to avoid that interpretation.
I just try to add that disclaimer whenever I talk about these things because I’m extra worried that people will be inspired by my example to jump straight into a severe program of self-deprivation without forethought. My lifestyle is objectively “self-deprivational” relative to most altruists, in a sense, so I’m afraid of being misread as an inspiration for doing things that make my reader unhappy. 🍵
Ah, forgot to reply to “What does practical things mean?”
Recently it’s involved optimizing my note-taking process, and atm it involves trying to find a decent, generalizable workflow for benefiting from AI assistance. Concretely, this has involved looking through a bunch of GitHub repos and software, trying to understand
➀ what’s currently technologically possible (← AutoCodeRover example),
➁ what might become possible within a reasonable time before the civilizational deadline,
➂ what is even desirable to introduce into my workflow in the first place.
I want to set myself up such that I can maximally benefit from increasing AI capabilities. I’m excited about low-code platforms for LLM-stacks[1], and LLM-based programming languages. The latter, taken to its limit, could be called something like a “pseudocode interpreter” or “fuzzy programming language”: you write a very high-level specification of what you want done, and LLM agents iron out the lower-level details. I want my code to be degenerate, in the sense that every subcomponent automatically adjusts itself to fill whatever niches are required for the system to work (this is a bad explanation, and I know it).
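To gesture at what one “fuzzy programming language” primitive could look like, here’s a minimal sketch: a decorator that treats a Python stub’s signature and docstring as the high-level spec and defers the implementation to a model. Everything in it is hypothetical — `llm_complete` is a placeholder for whatever LLM API you’d actually wire in, and `dedupe_notes` is just an illustrative spec.

```python
import functools
import inspect

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API; should return Python source."""
    raise NotImplementedError("wire this up to an actual model")

def fuzzy(stub):
    """Treat a signature-plus-docstring stub as a high-level spec.

    The stub's body is ignored; an implementation is generated by the
    model on first call and cached afterwards.
    """
    impl = None

    @functools.wraps(stub)
    def wrapper(*args, **kwargs):
        nonlocal impl
        if impl is None:
            spec = (f"def {stub.__name__}{inspect.signature(stub)}:\n"
                    f'    """{stub.__doc__}"""')
            source = llm_complete(
                "Implement this Python stub. Return only code.\n\n" + spec
            )
            namespace = {}
            # Trust boundary: review or sandbox generated code before relying on it.
            # Assumes the model returns a def with the same name as the stub.
            exec(source, namespace)
            impl = namespace[stub.__name__]
        return impl(*args, **kwargs)

    return wrapper

@fuzzy
def dedupe_notes(notes: list[str]) -> list[str]:
    """Merge near-duplicate notes, keeping the most detailed version of each."""
```

Generating lazily and caching means you pay the model cost once per stub; the `exec` call is the obvious trust boundary in any real version of this.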
The immediate next thing on my to-do list is just… finding a decent VS Code extension for integrating AI into whatever I do. I want to be able to say “hey, AI, could you boot up this repository (link) on my PC, and test whether it does thing X?” and have it just do that, with minimal confirmation-checks required on my part.
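The underlying action is simple enough to sketch, even if the hard part is the assistant orchestrating it; here’s roughly what “boot up this repository and test it” reduces to, assuming a Python repo with pytest-style tests (the URL and commands below are illustrative, not from this thread):

```python
import pathlib
import subprocess
import tempfile

def boot_and_test(repo_url: str, test_cmd: list[str]) -> bool:
    """Clone a repo into a temp dir, install it, and run its test command."""
    workdir = pathlib.Path(tempfile.mkdtemp(prefix="ai-boot-"))
    subprocess.run(["git", "clone", "--depth=1", repo_url, str(workdir)], check=True)
    # Many repos need an editable install (or `pip install -r requirements.txt`) first.
    subprocess.run(["pip", "install", "-e", "."], cwd=workdir, check=True)
    result = subprocess.run(test_cmd, cwd=workdir)
    return result.returncode == 0

# e.g. boot_and_test("https://github.com/example/repo", ["pytest", "-q"])
```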
I started making the first baby steps of a draft of something like this for myself via a plugin for Obsidian Canvas in early 2023[2], but then realized other people were going to build something like this anyway, and I could benefit from their work whenever they made it available.
I was thinking at a high level about what this could look like, but left the project because I don’t actually know how to code (shh), and LLMs were at that point ~useless for fixing my shortcomings for me.
I’m simply a vampire, you see.
There’s probably something better to link here, but I can’t think of it atm.