This actually seems like a really, really good idea. Thanks!
It seems possibly quite important to this experience that you not have electronics or other “bed activities” that you can be doing (depending on your goal here).
Maybe something like “Don’t have anything within arm’s reach of your bed”, so there’s no particular slope towards reading or electronics or whatnot; and if you do whole-heartedly start reading, you’ll first have had to get out of bed and grab the book.
Relevant study: “Smartphone Dependency & Consciousness” (Srivinas & Faiola 2014)
I find this wildly untrue, although I will try it.
Got it. Thank you for the suggestions; we’ll see!
Seems like someone went through my top-level posts and strong downvoted them.
Can’t you distract yourself with intellectual work?
In theory you might, but in practice you can’t. Distraction-avoidant behavior favors things that you can get into quickly, on the order of seconds—things like checking for Facebook notifications, or starting a game which has a very fast load time. Most intellectual work has a spinup, while you recreate mental context, before it provides rewards, so distraction-avoidant behavior doesn’t choose it.
Hmm... I think personal experience tells me that distraction-avoidant behaviour will still choose intellectual work, as long as it is quicker than the alternative.
I might choose a game over writing a LW shortform but I will still choose a LW shortform over writing a novel.
There’s a second level to this that I think the game would need to reach in order to work. One worry I’d have is that in competitive play, the response might not be a flourishing of interesting creative programming strategies, but “everyone just copies the best strategies and builds macros for them”.
The ideal version of this, IMO, would have gameplay varied enough that there are different higher-order programming problems you’ll need to figure out on the fly.
That said, this exists, and might be kinda what you want:
https://screeps.com/
Have you played Factorio?
It would be a nice addition to games if, instead of having a point where they can get boring (after mastery has been achieved), they had another level where tools (or tools for making tools) become gradually available to assist, and eventually replace, the player.
In my experience, the motion that seems to prevent mental crowding-out is intervening on the timing of my thinking: if I force myself to spend longer on a narrow question/topic/idea than is comfortable, e.g. with a timer, I’ll eventually run out of cached thoughts and spot things I would have otherwise missed.
I’ve found the “set a 5 minute timer” meme to not-quite-work because it takes me like 15 minutes just to get all my cached thoughts out, before I get to anything original. But yeah this basic idea here is a big part of my “actually thinking for real” toolkit.
__Levers error__.
Anna writes about bucket errors. Attempted summary: sometimes two facts are mentally tracked by only one variable; in that case, correctly updating the belief about one fact can also incorrectly update the belief about the other fact, so it is sometimes epistemic to flinch away from the truth of the first fact (until you can create more variables to track the facts separately).
There’s a conjugate error: two actions are bound together in one “lever”.
For example, I want to clean my messy room. But somehow it feels pointless / tiring, even before I’ve started. If I just started cleaning anyway, I’d get bogged down in some corner, trying to make a bunch of decisions about where exactly to put lots of futzy random objects, tiring myself out and leaving my room still annoyingly cluttered. It’s not that there’s a necessary connection between cleaning my room and futzing around inefficiently; it’s that the only lever I have right now that activates the “clean room” action also activates the “futz interminably” action.
What I want instead is to create a lever that activates “clean room” but not “futz”, e.g. by explicitly noting the possibility to just put futzy stuff in a box and not deal with it more. When I do that, I feel motivated to clean my messy room. I think this explains some “akrasia”.
The general pattern: I want to do X to achieve some goal, but the only way (that I know how right now) to do X is if I also do Y, and doing Y in this situation would be bad. Flinching away from action toward a goal is often about protecting your goals.
Just because someone is right about something or is competent at something, doesn’t mean you have to or ought to: do what they do; do what they tell you to do; do what’s good for them; do what they want you to do; do what other people think that person wants you to do; be included in their plans; be included in their confidence; believe what they believe; believe important what they believe important. If you don’t keep this distinction, then you might have a bucket error about “X is right about / good at Y” and “I have to Z” for some Z mentioned above, and Z might require a bunch of bad stuff, and so you will either not want to admit that X is good at Y, or else you will stop tracking in general when people are good at Y, or stop thinking Y matters (whereas by default you did think Y matters). Meritocracy (rule of the meritorious) isn’t the same thing as… meritognosis(?) (knowing who is meritorious). In general, -cracy is only good in some situations.
Hmm. I think I basically already did the first thing.
This is a good point. I’d do well to remember that repeated phrases stick in the mind. I’m currently on a bit of a reification spree where I’m giving names to a whole bunch of personal concepts (like moods, mental tools, etc.), and since I would like these phrases to stick in the mind, I think I shall repeat them.
Cowards going around downvoting without making arguments.
I feel like voting is polluted because there’s a correlation between goodness and not doing non-epistemic retaliatory voting. I don’t have any suggestions for solving this. Besides eigenvoting.
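(To gesture at what “eigenvoting” might mean: a hypothetical sketch, not an existing LW mechanism. Weight each voter by the principal eigenvector of the who-endorses-whom graph, PageRank-style, so a clique that mostly upvotes itself carries little weight unless the rest of the graph endorses it. All names and the dense-matrix setup are illustrative.)

```python
import numpy as np

def eigen_weights(endorse: np.ndarray, damping: float = 0.85, iters: int = 100) -> np.ndarray:
    """endorse[i, j] = 1 if voter i has upvoted voter j's content.
    Returns per-voter weights: the (damped) principal eigenvector of the
    endorsement graph, found by PageRank-style power iteration."""
    n = endorse.shape[0]
    out = endorse.sum(axis=1, keepdims=True).astype(float)
    out[out == 0] = 1.0  # voters who endorse nobody just pass no weight on
    m = (endorse / out).T  # m[j, i] = fraction of i's endorsements going to j
    w = np.full(n, 1.0 / n)
    for _ in range(iters):
        w = (1 - damping) / n + damping * (m @ w)
    return w / w.sum()

def post_score(votes: np.ndarray, weights: np.ndarray) -> float:
    """votes[i] in {-1, 0, +1}; the post's score is the weighted sum,
    so retaliatory votes from low-weight voters barely move it."""
    return float(votes @ weights)
```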
It occurred to me that LessWrong could have a karma prediction market / karma bets. It looks like this has been suggested a few times before, years ago (see search results: https://www.lesswrong.com/search?query=karma%20bets ), so I’ll just make this note to bump the idea back up a bit.
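(One hypothetical shape for a single karma bet, just to make a settlement rule concrete; the threshold rule and names below are mine, not from the linked suggestions.)

```python
def settle_karma_bet(score_at_deadline: int, threshold: int,
                     stake_yes: int, stake_no: int) -> tuple[int, int]:
    """Two users escrow karma on "will this post's score exceed `threshold`
    by the deadline?"; the winner takes the whole pot. Returns
    (payout to the yes-bettor, payout to the no-bettor)."""
    pot = stake_yes + stake_no
    return (pot, 0) if score_at_deadline > threshold else (0, pot)

# e.g. settle_karma_bet(score_at_deadline=37, threshold=25,
#                       stake_yes=10, stake_no=10) -> (20, 0)
```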
Crazy idea: you’re not allowed to downvote without either writing an explanation of why, or pressing agree on someone else’s explanation of why they downvoted. Or some variation of that.
It doesn’t work to just ask what effect it would have on the world if everyone like me made decisions according to this rule. You also have to ask how the rest of the world would respond. If you bail out the banks, you also call into existence those bankers who take advantage of banks being bailed out. If you give charity, you call into existence those charities that take advantage of free money being given out. Your behavior is always simultaneously responding to a niche and creating niches.
Sylvester McMonkey von Neumann
Observation: looking at computer screens causes a feeling like the “burning” of “burning out”: tense, buzzy, thirsty, strained, insomniac. But it stops doing that if I switch from looking at computer screens, to looking at computer screens *in order to do some particular thing that I care about*.
[I use plenty of blue-blocking.]
Hypothesis: screens are bad not intrinsically, but because they are “activating” (maybe because they’re glowy and colorful and super-responsive and connect you to everything and make every activity and stimulating content instantly available). Because they are “activating”, but in some way other than by being deeply motivating, they Goodhart apart activation from deep caring. So your brain is highly activated without caring, which puts it in a high-time-preference mode: there’s nothing deeply guiding your activity, but you still have a lot of local energy that wants to execute actions or be stimulated with content to process.
Thinking is like kicking a rock down a lane as you walk. If the object is oddly shaped, it may tumble oddly and go off in some oblique direction even if you impelled it forcefully straight. Without care, you’re liable to leave the object by the wayside and replace it with another, or with nothing. Tendencies of the object’s motion are produced both by the landscape—the slopes and the textures—and by the way you impel it, in big or little steps, with topspin or sidespin. The object may get stuck in a pothole or by the curb, and there’s no guarantee you’ll have the patience, or the small-foot-ness, needed to free it. People may look at you funny, as though you’re acting like a child, and you’re certain to leave the project behind without a bit of obliviousness and stand-offishness in you. It’s not clear whether you’re accomplishing anything with each step, let alone whether there is or what might be the ultimate payoff.
PDFs support hyperlinks: they can define anchors at arbitrary points within themselves for a hyperlink, and they can hyperlink out. You can even specify a target page in a PDF which doesn’t define any usable anchors (which is dead useful and I use it all the time in references): e.g. https://www.adobe.com/content/dam/acom/en/devnet/acrobat/pdfs/pdf_open_parameters.pdf#page=5
So I guess the issue here is having a tool which parses and edits PDFs to insert hyperlinks. That’s hard. Even if you solve the lookup problem by going through something like Semantic Scholar (the way I use https://ricon.dev/ on gwern.net for reverse citation search), PDFs aren’t made for this: when you look at a bit of text which is the name of a book or paper, it may not even be text, it may just be an image… Plus, your links will die. You shouldn’t trust any of those sites to stay up long-term at the exact URLs they are at.
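(To make “a tool which parses and edits PDFs to insert hyperlinks” concrete: the mechanically easy half can be sketched with PyMuPDF, assuming the citation string actually exists in the PDF’s text layer and that you’ve already resolved it to a URL; those are exactly the two hard parts named above. `resolved_url` is a stand-in for whatever Semantic-Scholar-style lookup you trust.)

```python
import fitz  # PyMuPDF

def link_citation(pdf_in: str, pdf_out: str, citation: str, resolved_url: str) -> int:
    """Find every occurrence of `citation` in the text layer and attach a
    clickable URI link annotation over it. Returns the number of links added;
    0 if the citation is only present as an image (no text to search)."""
    doc = fitz.open(pdf_in)
    hits = 0
    for page in doc:
        for rect in page.search_for(citation):
            page.insert_link({"kind": fitz.LINK_URI, "from": rect, "uri": resolved_url})
            hits += 1
    doc.save(pdf_out)
    return hits
```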
About links dying: one way to solve this would be if we used peer-to-peer networks for documents like PDFs. I’m excited about the Dat protocol for things like this, though it will need more popularity of course. https://docs.datproject.org/docs/intro
It seems like our current URL system is quite poor in comparison.
If some of our measure is in a simulation that’s being run to determine whether our measure in real worlds will acausally bargain to get gains from trade, it’s maybe a defection against the bargaining process to force the universe to provide a lot of compute for us (e.g. by running an intergalactic civilization that’s cryptographically verified to actually be running), before we’ve done the bargaining, or at the very least legibly, truly precommitted to a bargaining process. Otherwise we force simulators to either waste a lot of resources simulating us, or else give up on bargaining with us altogether. (We’d not even necessarily get value from those resources, if e.g. there’s an initial period of scrambling to expand into the lightcone before the party starts.)
Capitalism is good, anarchy is the default political position. A good argument against anarchism is “but what if someone forms an army”, or in other words “we can’t just stop punching ourselves”. A lot of evil seems strictly downstream of having X-archy / X-cracy, for any value of X. Power corrupts, as they say, including democratic power. But it’s not true universally: autonomous power doesn’t corrupt, relative to its own values.
Another argument against anarchy is, “but we have to enforce rights”. It makes sense to have larger scoped laws, even global laws, as a clarification of an implicit threat by almost everyone. But that’s a far cry from law as we have it. Law is better than war, but it’s worse than freedom. The idea of Law seems like mostly a cover of legitimacy for imperialism / democracy / universalism / anti-experimentation; local customary law is a more natural sort of thing—what to do if two guys are having a fight. I’d rather there were many tiny states, as some have suggested. To put it another way, universalism—trying to maximize agreement—seems to have failed, never worked in the first place, and never even plausibly seemed workable in the first place.
States are militaries parasitic on producers. Parasites are suicidal (or rather, matricidal—destroying their own substrate, like a forest fire), or at best symbiotic after victory (like a conqueror who has plundered until there’s nothing left to do but try to help the peasants produce more loot). Democide is suicide, so the 20th century was the century of suicide, at least if you believe in states. Universalism is a worthwhile project, but it’s only possible between free autonomous agents who wish to live; parasites don’t wish to live, and hosts to parasites aren’t autonomous, and agents living under false Law aren’t free; “universalism” among unfree agents is imperialism, “universalism” among hosts to parasites is suicide, and parasites don’t think and so aren’t candidates for participating in universalism.
So what is happening with the Lambda variant?
1. Growth quickens.
2. People notice, and are more willing to lend capital.
3. Take out too many loans.
4. Use your borrowed money to capture the apparatus that would make you pay your debt.
5. Cancel your debt.
Reinforcing based on naively extrapolated trajectories produces double binds. We have a reinforcer R and an agent A. R doesn’t want A to be too X or too not-X. Whenever A does something that’s uncommonly X-ish, R notices that A seems to be shifting more towards X-ishness in general. If this shift continues as a trajectory, then A will end up way too X-ish. So to head that off, R negatively reinforces A. Likewise, R punishes anything that’s uncommonly not-X-ish. As an agent, A is trying to figure out which trajectory to be on. So R isn’t mistaken that A is often putting itself on trajectories which naively imply a bad end state. But, A is put in an impossible situation. R must model that R and A will continue their feedback cycle in the future.
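(A toy numerical version of the double bind, with all numbers illustrative. R projects each of A’s moves forward as if it were a trend; any noticeable probe in either direction then gets punished, even though A’s actual position stays well inside R’s comfort band.)

```python
def naive_extrapolation(prev: float, curr: float, horizon: int = 10) -> float:
    """R's model: "if this shift continues as a trajectory..."."""
    return curr + (curr - prev) * horizon

def reinforcer_feedback(prev: float, curr: float, lo: float = -1.0, hi: float = 1.0) -> int:
    """-1 = punish, 0 = no feedback. R punishes whenever the naive
    projection of A's latest move leaves R's comfort band [lo, hi]."""
    proj = naive_extrapolation(prev, curr)
    return -1 if (proj > hi or proj < lo) else 0

# A probes gently in each direction from a moderate position (0.0) and is
# punished both ways, even though 0.3 and -0.3 are nowhere near the band's edges:
for move in (+0.3, -0.3):
    print(move, reinforcer_feedback(prev=0.0, curr=move))  # both print -1
```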
Amnesia. When something becomes unready-to-hand for the first time, it becomes newly available for decoupled thought, and in particular, memory. When we ourselves become unready-to-hand, we’re presented with the possibility of knowing what we are. Different kinds of things are more or less available for understanding (sensibility, memory, significance, use) to gather around them. Whenever we try and fail to understand something, we have a choice what to do with toeholds left over from our aborted forays: have them dissolve, or have them continue to gather understanding. For example: “I can’t figure out how to fix the table because it involves measurements and arithmetic, and I’m bad at math so it would be a waste to try to learn fixing tables” vs. “I don’t know how to fix the table, so I’ll stop trying now, and I’ll seize chances I get to practice easier arithmetic” (not necessarily as explicit thoughts of course). Since toeholds strengthen each other, dissolution of toeholds amplifies dissolution of toeholds. Since what we are is “big” relative to our understanding, when what we are is unready-to-hand, the default attractor is dissolution of understanding (reset-and-forget). Big, deep, external events are not noticed, and if they are they are not tracked in any detail, and if they are they aren’t understood, and if they are they aren’t remembered. (Whereas events that are well-understood, even if trivial, are easily remembered.) Cf. Samo Burja’s work (remembering the collapse of civilization) and Nietzsche (remembering the death of God). Hitler, for example, is impossible to forget and also as yet impossible to remember. Yak-shaving and poetry could help. Rejecting “unfounded” abstraction is anti-helpful. Saying the same thing about the same thing is more helpful than it seems.
[Meta] I’m dissatisfied with posting on LW, and even though I don’t know of a better place I should maybe look for one. It might be because I expect / hope for some kind of engagement that doesn’t happen. That might be because people aren’t interested in the topics and/or the style of discourse I’m interested in. I find myself blaming the frontpage system, which seems to be set up so that posts that don’t get early upvotes will never be frontpage posts (rather than personal blogposts) on the frontpage, or will be so for a very short time. But, IDK if that actually matters—seems likely that people still wouldn’t be interested in what I’m interested in even if that were different. I also find myself judging the more highly upvoted content as being a mix that includes good stuff and a lot of banal stuff. I can tell stories where people are semi-mindlessly upvoting banal stuff because it sounds good on a GPT-3 level or it’s by someone they like or a feedback loop with being on the frontpage for longer, but mostly that feels like a cover for trying to argue someone into having different tastes, which could be useful but probably not in this context. So mostly, this seems like a difference of interest, which means I should go somewhere else. Not sure. I guess I never really learned how to navigate the internet by “doing the searches that would turn up the good stuff written for people who know how to do the right searches” or whatever it is, and the version of that for blog authors, and now’s as good a time as any.
Silence, a return to thought for the dissembled. Over time, since I was born, each of my motives has gathered around it a system of strategies. Sometimes I’m engaged in taking the world into me, and reprogramming myself in accordance with forward-looking design, which we can call “reasoning”. Some of the strategies of some of my motives interfere with reasoning. Most of these strategies were gathered so that I could speak other than to express my thoughts. Most of those strategies don’t need to interfere with reasoning, if I’m not about to speak. So if I return to a silence, provisionally permanent until my motives newly pull me to speak, then I’ll return to thought, if I’m pulled to think.
Cryonics is awkward in that the only way to think it’s a good idea is to be willing to put weight down on pure extrapolations (that technology, unless permanently curtailed, will be able to revive vitrified people). What else is like this? In practice people aren’t willing to do this, maybe? E.g. in the pandemic, most people weren’t willing to put much weight on conclusions drawn from extrapolating exponential growth.
As opposed to reasoning from basically identical cases. I hear things like “when someone’s been revived, then I’ll consider it”.
So what’s up with “death gives meaning to life”? In some of my conversations about cryonics, it seems like a significant obstacle to getting people to “actually evaluate” (according to me) the likely costs and benefits based on reasoning about the world (rather than e.g. just doing what others do).
Hypothesis: it’s a way of coping with fear of death (of one’s self, and of loved ones), by convincing one’s self (unepistemically) that it wouldn’t actually be good to avoid death.
Hypothesis: it’s a distraction from what is, roughly speaking, suicidality stemming from a sense that life is hopeless / intractable / only suffering.
Hypothesis: it’s an excuse to not be continually burdened by elders.
Hypothesis: it’s a way of saving face; it’s “just something people say in this situation” to make it widely acknowledged that they didn’t act improperly.
Hypothesis: it’s a literal belief. If you take someone’s food away, they’ll appreciate it more when they have food; likewise with life. Fear of death pushes you to greater heights. (There’s two different things here, really. There’s fear of death: the constant threat of death, which you are forced to continually contend with. And then there’s actual finiteness—everyone dies before age 100.)
Hypothesis: it’s a psyop to get people to not criticize Yahweh for allowing pointless death.
Hypothesis: it’s a garbled form of a judgement that, given that we’re too technologically far away from preventing involuntary death, we should focus on making the world better for the future while we’re naturally healthy, at the expense of long-shots of life extension.
Hypothesis: it’s a garbled form of saying that you like knowing the rough arc of people’s lives, because it creates social legibility + shared concepts + social fabric (e.g. rituals of birth, marriage, death), and the current arc involves death around age 80-100.
Hypothesis: it’s the “what the hell” effect. I’m going to die. So it’s fine for me to do risky (and fun / meaningful) stuff.
Generally, I’m trying to understand not necessarily what “death gives meaning to life” “really means” in any sense, but rather to understand what’s going on with people who say it.
There’s two stances I can take when I want to express a thought so that I can think about it with someone. Both could be called “expressing”. One could be called “pushing-out”: like I’m trying to “get it off my chest”, or “leave it behind / drop it so I can move on to the next thought”. The other is more appropriately “expressing”, as in pressing (copying) something out: I make a copy and give it to the other person, but I’m still holding the original. The former is a habit of mine, but on reflection it’s often a mistake; what I really want is to build on the thought, and the way to do that is to keep it active while also thinking the next thought. The underlying mistake might be incorrectly thinking that the other person can perform the “combine already-generated thoughts” part of the overall progression while I do the “generate individual new thoughts” part. Doing things that way results in a lot of dropped thoughts.
Say Alice has a problem with Bob, but doesn’t know what it is exactly. Then Bob tries to fix it cooperatively by searching in dimension X for settings that alleviate Alice’s problem. If Alice’s problem is actually about Bob’s position on dimension Y, not X, Bob’s activity might appear adversarial: Bob’s actions are effectively goodharting Alice’s sense of whether things are good, in the same way he’d do if he were actually trying to distract Alice from Y.
Generally, apprenticeships should have planned obsolescence. A pattern I’ve seen in myself and others: A student takes a teacher. They’re submissive, in a certain sense—not giving up agency, or harming themselves, or following arbitrarily costly orders, or being overcredulous; but rather, a narrow purely cognition-allocating version of assuming a low-status stance: deferring to local directions of attention by the teacher, provisionally accepting some assumptions, taking a stance of trying to help the teacher with the teacher’s work. This is good because it enhances the bandwidth and depth of transmission of tacit knowledge from the teacher.
But, for many students, it shouldn’t be the endpoint of their development. At some point they should be questioning all assumptions, directing their attention and motivation on all levels, being the servant of their own plans. When this is delayed, e.g. when the teacher or the student or something else is keeping the student within a fixed submissive role, the student is stunted, bitter, wasted, restless, jerked around, stagnant. In addition to lines of retreat from social roles, have lines of fundamental developmental change.
Say Alice is making some point to Bob, and Carol is listening and doesn’t like the point and tries to stop Alice from making the point to Bob. What might be going on? What is Carol trying to do, and why? She might think Alice is lying / disinforming—basing her arguments on false information or invalid arguments with false conclusions. But often that’s not what Carol reports; rather, even if Alice’s point is true and her arguments are valid reasoning from true information, and Carol could be expected to know that or at least not be so sure that’s not the case, Carol still wants to stop Alice from making the point. It’s a move in a “culture war”.
But what does that even mean? We might steelman Carol as implicitly working from an assumption like: maybe Alice’s literal, decoupled point is true; but no one’s a perfect decoupler, and so Bob might still make mistaken inferences from Alice’s true point, leading Bob to do bad things and spread disinformation. Another interpretation is more Simulacra: the claims have no external meaning, it’s a war for power over the narrative, and you want to say your side’s memes and block the other side’s memes.
Here’s a third interpretation, close to the Simulacra one, but with a clarification: maybe part of what’s going on, is that Bob does know how to check local consistency of his ideology, even though he lacks the integrative motive or skill to evaluate his whole position by modeling the world. So Bob is going to copy one or another ideology being presented. From within the reach of Bob’s mind, the conceptual vocabularies of opposed ideologies don’t have many shared meanings, even though on their own they are coherent and describe at least some of the world recognizably well. So there’s an exclusion principle: since Bob can’t assimilate the concepts of an ideology opposed to his into his vocabulary, unless given a large activation push, Bob will continue gaining fluency in his current vocabulary while the other vocabulary bounces off of him. However, talking to someone is enough activation energy to at least gain a little fluency, if only locally and temporarily, with their vocabulary. Carol may be worried that if there’s too many instances of various Alices successfully explaining points to Bob, then Bob will get enough fluency to be “over the hump” and will start snowballing more fluency in the opposing ideology, and eventually might switch loyalties.
Persian messenger: “Listen carefully, Leonidas. Xerxes conquers and controls everything he rests his eyes upon. He leads an army so massive it shakes the ground with its march, so vast it drinks the rivers dry. All the God-King Xerxes requires is this: a simple offering of earth and water. A token of Sparta’s submission to the will of Xerxes.”
[...]
Persian messenger: “Choose your next words carefully, Leonidas. They may be your last as king.”
[...]
Leonidas: “Earth and water… You’ll find plenty of both down there.” [indicates the well with his sword]
Persian messenger: “No man, Persian or Greek, no man threatens a messenger!”
Leonidas: “You bring the crowns and heads of conquered kings to my city’s steps! You insult my queen. You threaten my people with slavery and death! Oh, I’ve chosen my words carefully, Persian, while yours are lashed from your lips by the whip of your God-King. I’ll give you a final chance to live with justice: give up your fearful allegiance to your slavemaster Xerxes, do not speak his threats for him any more, and come live in Greece as a free man.”
Persian messenger: “This is blasphemy! This is madness!”
Leonidas: “Madness? THIS IS SPARTA!” [kicks the Persian messenger into the deep well]
Consider the agent that wants to maximize the amount of paperclips produced next week. Under the usual formalism, it has stable preferences. Under your proposed formalism, it has changing preferences—on Tuesday it no longer cares about the amount of production on Monday. So it seems like this formalism loses information about stability. So I don’t see the point.
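(One way to write the contrast down; notation mine. Let $c_t$ be paperclips produced at step $t$ and $W$ the set of steps in next week. The usual formalism scores whole histories with one fixed function, while the time-indexed reading re-derives a function at each day $d$:)

```latex
\[
U(\tau) \;=\; \sum_{t \in W} c_t
\qquad \text{vs.} \qquad
U_d(\tau) \;=\; \sum_{t \in W,\ t \ge d} c_t .
\]
```

On Tuesday, $U_{\mathrm{Tue}}$ gives Monday’s production zero weight, so the family $\{U_d\}$ can’t distinguish the stable maximizer from an agent whose values genuinely drift; that’s the lost information about stability.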
I think a counterexample to “you should not devote cognition to achieving things that have already happened” is being angry at someone who has revealed they’ve betrayed you, which might acause them to not have betrayed you.