I think this comment is 100% right, despite being perhaps somewhat too modest. It’s more useful to think of sapience as introducing a delta on behavior, rather than as a way to execute desired behavior. The second is a classic Straw Vulcan failure mode.
SquirrelInHell
I wonder if all of the CFAR techniques will have different names after you are done with them :) Looking forward to your second and third iteration.
All sounds sensible.
Also, reminds me of the 2nd Law of Owen:
In a funny sort of way, though, I guess I really did just end up writing a book for myself.
[Note: I am writing from my personal epistemic point of view from which pretty much all the content of the OP reads as obvious obviousness 101.]
The reason people don’t know this is not that it’s hard to know. This is a common fallacy: “if I say true things that people apparently don’t know, they will be shocked and turn their lives around”. But in fact most people around here have more than enough theoretical capacity to figure this out, and much more, without any help. The real bottleneck is human psychology, which is not able to take certain beliefs seriously without much difficult work at the fundamentals. So “fact” posts about x-risk are mostly preaching to the choir. At best, you get some people acting out of scrupulosity and social pressure, which is pretty useless.
Of course I still like your post a lot, and I think it’s doing some good on the margin. It’s just that it seems like you’re wasting energy on fighting the wrong battle.
Note to everyone else: the least you can do is share this post until everyone you know is sick of it.
It is a little unfair to say that buying 10 bitcoins was everything you needed to do. I owned 10 bitcoins, and then sold them at a meager price. Nothing changed as a result of me merely understanding that buying bitcoins was a good idea.
What you really needed was to sit down and think up a strict selling schedule, and also commit to following it. E.g. spend $100 on bitcoin now, and later sell exactly 10% of your bitcoins every time that 10% becomes worth at least $10,000 (I didn’t run the numbers to check if these exact values make sense, but you get the idea).
Upstream of not taking effective action was unwillingness to spend a few hours thinking hard about what would actually be smart to do if the hypothetical proved true.
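The kind of precommitted rule described above can be sketched in a few lines. This is my own hypothetical illustration; the 10% fraction and $10,000 threshold are the comment’s explicitly unchecked example numbers, not tested values, and this is not financial advice.

```python
# Hypothetical sketch of a precommitted selling schedule: sell a fixed
# fraction of remaining coins whenever that fraction is worth at least a
# preset dollar amount. Numbers are illustrative, as in the comment.

def run_schedule(coins, price_path, fraction=0.10, threshold=10_000):
    """Walk through a sequence of prices, selling `fraction` of the
    remaining coins each time that tranche is worth >= `threshold`."""
    proceeds = 0.0
    for price in price_path:
        tranche = coins * fraction
        if tranche * price >= threshold:
            proceeds += tranche * price
            coins -= tranche
    return coins, proceeds
```

For example, `run_schedule(10, [500, 20_000])` sells one coin at the second price and keeps the remaining nine; the point is that the rule is fixed in advance, so following it requires no judgment calls in the moment.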
At grave peril of strawmanning, a first-order approximation to SquirrelInHell’s meta-process (what I think of as the Self) is that it is the only process in the brain with write access, i.e. the power of self-modification. All other brain processes are to treat the brain as a static algorithm and solve the world from there.
Let me clarify: I consider it the meta level when I think something like “what trajectory do I expect to have as a result of my whole brain continuing to function as it already tends to do, assuming I do nothing special with the output of the thought process which I’m using right now to simulate myself?”. This simulation obviously includes everyday self-modification which happens as a side-effect of acting (like the 10% shift you describe). The key question is, am I reflectively consistent? Do I endorse the most accurate simulation of myself that I can run?
The meta process is what happens when I want to make sure that I always remain reflectively consistent. Then I conjure up a special kind of self-modification which desires to remember to do itself, and to continue to hold on to enough power to always win. I aspire to make this meta process an automatic part of myself, so that my most accurate simulation of any of my future trajectories already automatically includes self-consistency.
Also: enjoy your CFAR workshop! :)
Humans are not thermostats, and they can do better than a simple mathematical model. The idea of oscillation with decreasing amplitude you mention is well known from control theory, and it’s looking at the phenomenon from a different (and, I dare say, less interesting) perspective.
To put it in another way, there is no additional deep understanding of reality that you could use to tell apart the fourth and the sixth oscillation of a converging mathematical model. If you know the model, you are already there.
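For readers unfamiliar with the control-theory picture being referenced: the whole “converging oscillation” model fits in one line, e.g. a generic exponentially damped cosine (my own textbook-style illustration, not something from the thread), which is exactly why knowing the model leaves nothing further to distinguish one oscillation from another.

```python
import math

# Generic damped oscillation: amplitude decays exponentially while the
# system overshoots back and forth around its target (here, zero).
def damped(t, amplitude=1.0, decay=0.5, freq=1.0):
    return amplitude * math.exp(-decay * t) * math.cos(freq * t)

# Successive extrema (at t = 0, pi, 2*pi, ...) shrink geometrically.
extrema = [abs(damped(k * math.pi)) for k in range(4)]
```

Every extremum, fourth or sixth, is generated by the same three parameters; there is no extra structure left to “deeply understand”.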
[Note: I’m not sure if this was your concern—let me know if what I write below seems off the mark.]
The most accurate belief is rarely the best advice to give; there is a reason why these corrections tend to happen in a certain order. People holding the naive view need to hear the first correction, those who overcompensated need to hear the second correction. The technically most accurate view is the one that the fewest people need to hear.
I invoke this pattern to forestall a useless conversation about whose advice is objectively best.
In fact, I think it would be good practice, before giving advice, to do your best to trace back to the naive view, count the reversals, and tell your reader which level you are advising on. (This is surprisingly doable if you bother to do it.)
Here we go: the pattern of this conversation is “first correction, second correction, accurate belief” (see growth triplets).
Naive view: “learn from masters”
The OP is the first correction: “learn from people just above you”
Your comment is the second correction: “there are cases where teacher’s advice is better quality”
The accurate belief takes all of this into account: “it’s best to learn from multiple people in a way that balances wisdom against accessibility”
Yes! Not just improved, but leading by stellar example :)
Shell, Shield, Staff
People have recently discussed short words from various perspectives. While I was initially not super-impressed by this idea, this post made me shift towards “yeah, this is useful if done just right”.
Casually reading this post on your blog yesterday was enough for the phrase to automatically latch on to the relevant mental motion (which it turns out I was already using a lot), solidify it, make it quicker and more effective, and make me want to use it more.
It has since then been popping up in my consciousness repeatedly, on at least 5 separate occasions after I have completely forgotten about it. Once, taken to the extreme, it moved me directly into a kinda “fresh seeing” or “being a new homunculus” state of mind, where I was looking at a familiar landscape and having long-unthought thoughts in the style of “why do all these people walk? flying would be more useful. why is everything so slow, big, and empty?”.
To summarize: I think this name hit right in the middle of some concept that already badly “wanted” to materialize in my mind, and it also managed to be shorter and catchier than what I would have come up with myself.
Good job! Let the HWA be with you!
Your point can partially be translated to “make reasonably close to 1”—this makes the decisions less about what the moderators want, and allows longer chains of passing the “trust buck”.
However, to some degree “a clique moved in that wrote posts that the moderators (and the people they like) dislike” is pretty much the definition of a spammer. If you say “are otherwise extremely good”, what is the standard by which you wish to judge this?
Yes, and also it’s even more general than that—it’s sort of how progress works on every scale of everything. See e.g. tribalism/rationality/post-rationality; thesis/antithesis/synthesis; life/performance/improv; biology/computers/neural nets. The OP also hints at this.
Actionable Eisenhower
This seems to rest on a model of people as shallow, scripted puppets.
“Do you want my advice, or my sympathy?” is really asking: “which word-strings are your password today?” or “which kind of standard social script do you want to play out today?” or “can you help me navigate your NPC conversation tree today?”.
Personally, when someone tries to use this approach on me, I am inclined to instantly write them off and never look back. I’m not saying everyone is like me, but you might want to be wary of what kind of people you are optimizing yourself for.
I’d add that the desire to hear apologies is itself a disguised status-grabbing move, and it’s prudent to stay wary of it.
While I 100% agree with your views here, and this is by far the most sane opinion on akrasia that I’ve seen in a long time, I’m not convinced that so many people on LW really “get it”. Although to be sure, the distribution of behavior that signals this has significantly shifted since the move to LW2.0.
So overall I am very uncertain, but I still find it more plausible that the reason the community as a whole stopped talking about akrasia is more like: people ran out of impressive-seeming or fresh-seeming things to say about it, while the minority that could have contributed actual new insights turned away for better reasons.
Big props for posting a book review—that’s always great and in demand. However, some points on (what I think is) good form while doing these:
- a review on LW is not an advertisement; try to write reviews in a way that is useful to people who decide to not read the book
- I also don’t care to see explicit encouragement to read a book—if what you relate about its content is tempting enough, I expect that I will have the idea to go and read it on my own
Bad typo.
By the way—did I mention that inventing the word “hammertime” was epic, and that now you might just as well retire, because there’s no way to compete against your former glory?