(edit: formatting on this appears to have gone all to hell and idk how to fix it! Uh oh!)
(edit2: maybe fixed? I broke out my commentary into a second section instead of doing a spoiler section between each item on the list.)
(edit3: appears fixed for me)
Yep, I can do that legwork!
I’ll add some commentary, but I’ll “spoiler” it in case people don’t wanna see my takes ahead of forming their own, or just general “don’t spoil (your take on some of) the intended payoffs” stuff.
https://www.projectlawful.com/replies/1743791#reply-1743791
https://www.projectlawful.com/posts/6334 (Contains infohazards for people with certain psychologies, do not twist yourself into a weird and uncomfortable condition contemplating “Greater Reality”—notice confusion about it quickly and refocus on ideas for which you can more easily update your expectations of future experience within the universe you appear to be getting evidence about. “Sanity checks” may be important. The ability to say to yourself “this is a waste of time/effort to think about right now” may also be important.) (This is a section of Planecrash where a lot of the plot-relevant events have already taken place and are discussed, so MAJOR SPOILERS.) (This is the section that “Negative Coalition” tweet came from.)
https://www.projectlawful.com/posts/5826
https://www.projectlawful.com/replies/1778998#reply-1778998
https://www.projectlawful.com/replies/1743437#reply-1743437
https://www.projectlawful.com/replies/1786657#reply-1786657
https://www.projectlawful.com/replies/1771895#reply-1771895
“No rescuer hath the rescuer. No Lord hath the champion, no mother and no father, only nothingness above.” What is the right way to try to become good at the things Eliezer is good at? Why does naive imitation fail? There is a theme here, one whose corners appear all over Eliezer’s work—see Final Words for another thing I’d call a corner of this idea. What is the rest? How does the whole picture fit together? Welp. I started by writing a conversation in the style of Gödel, Escher, Bach, or A Semitechnical Introduction to Solomonoff Induction, where a version of me was having a conversation with an internal model of Eliezer I named “Exiezer”—and used that to work my way through connecting all of those ideas in an extended metaphor about learning to craft handaxes. I may do a LessWrong post including it, if I can tie it to a sufficiently high-quality object-level discussion of education and self-improvement.
This is a section titled “the meeting of their minds,” where Keltham and Carissa go full “secluded setting, radical honesty, total mindset dump.” I think it is one of the most densely interesting parts of the book, and it represents a few techniques more people should try. “How do you know how smart you really are?” Well, have you ever tried writing a character smarter than you think you are, doing something that requires more intelligence than you feel like you have? What would happen if you attempted that? Well, you can have all the time in the world to plan out every little detail, check over your work, list alternatives, study relevant examples/material, etc. This section has the feeling of people actually running the race they’ve been practicing for, using the crispest versions of the techniques they’ve been iterating on. Additionally: “have you ever attempted to ‘meet minds’ with someone? What sort of skills would you want to single out to practice? What sort of setting seems like it’d work for that?” This section shows two people working through a really serious conflict. It’s a place where their values have come seriously into conflict, and yet, to get more of what they both want, they have to figure out how to cooperate. Also, they’ve both ended up pretty seriously damaged, and they have things they need to untangle.
This is a section called “to earth with science,” and… well, how useful it is depends on how much you stand to gain from thinking more critically about the academic/scientific institutions we have on this planet. It’s very much Eliezer doing a pseudo-rant about what’s broken here, echoing the tone of something like Inadequate Equilibria. The major takeaway is something like what you get from a piece of accurate satire—the lèse-majesté which shatters some of the memes handed down to you by the wiser-than-thou people who grimly say “we know it’s not perfect, but it’s the best we have” and expect you not to have follow-up questions about that sort of assertion.
This is my favorite section from “to hell with science.” The entire post is a great lecture about the philosophy and practice of science, but this part in particular touches on a concept I expect to come up in more detail later regarding AIs and agency. One of the cruxes of this whole AI debate is whether you can separate out “intelligence” and “agency”—and this part provides an explanation for why that whole idea is something of a failure to conceptualize these things correctly.
This is Keltham lecturing on responsibility, the design of institutions, and how to critique systems through the lens of a computer programmer. This is where you get some of the juiciest takeaways about Civilization as Eliezer envisions it. The “basic sanity check” of “who is the one person responsible for this,” and its requisite exception handling, is particularly actionable, IMO.
“Learn when/where you can take quick steps and plant your feet on solid ground.” There’s something about feedback loops here, and the right way to start getting good at something. May not be terribly useful to a lot of people, but it stood out as a prescription for people who want to learn something. Invent a method, try to cheat, take a weird shortcut, guess. Then, check whether your results actually work. Don’t go straight for “doing things properly” if you don’t have to.
Keltham on how to arrive at Civilization from first principles. This is one of the best lectures in the whole series, from my perspective. It’s framed as a thought experiment that I could on-board and play with in spare moments.
Hopefully some of these are interesting and useful to you, Mir, as well as to others here. There’s a ton of other stuff, so I may write a follow-up with more later on if I have time.
This is awesome, thank you so much! Green leaf indicates that you’re new (or new alias) here? Happy for LW! : )
“But how does Nemamel grow up to be Nemamel? She was better than all her living competitors, there was nobody she could imitate to become that good. There are no gods in dath ilan. Then who does Nemamel look up to, to become herself?”
I first learned this lesson in my youth when, after climbing to the top of a leaderboard in a puzzle game I’d invested >2k hours into, I was surpassed so hard by my nemesis that I had to reflect on what I was doing. Thing is, they didn’t just surpass me and everybody else, but instead continued to break their own records several times over.
Slightly embarrassed by having congratulated myself for my merely-best performance, I had to ask “how does one become like that?”
My problem was that I’d always just been trying to get better than the people around me, whereas their target was the inanimate structure of the problem itself. When I had broken a record, I said “finally!” and considered myself complete. But when they did the same, they said “cool!”, and then kept going. The only way to defeat them would be to stop trying to defeat them, and instead focus on fighting the perceived limits of the game itself.
To some extent, I am what I am today, because I at one point aspired to be better than Aisi.
Two years ago, I didn’t realize that 95% of my effort was aimed at answering what were ultimately other people’s questions. What happens when I learn to aim all my effort at questions purely arising from bottlenecks I notice in my own cognition?
When a thinker knows that he must justify himself to others (who may or may not understand his reasoning), his brain’s background search is biased in favour of what-can-be-explained. For early thinkers, this bias tends to be good, because it prevents them from bullshitting themselves. But there comes a point where you’ve mostly learned not to bullshit yourself, and you’re better off purely aiming your cognition based on what you yourself think you understand.
I hate how much time my brain (still) wastes on daydreaming and coming up with sentences optimized for impressing people online. What happens if I can instead learn to align all my social-motivation-based behaviours to what someone would praise if they had all the mental & situational context I have, and were harder to fool than I am? Can my behaviour then be maximally aligned with [what I think is good], and [what I think is good] be maximally aligned with my best effort at figuring out what’s good?
I hope so, and that’s what Maria is currently helping me find out.