I could not comment on Substack itself. It presents me with a CAPTCHA where I have to prove I am human by demonstrating qualia. As a philosophical zombie, I believe this is discriminatory but must acknowledge that no one of ethical consequence is being harmed. Rather than fight this non-injustice, I am simply posting on the obsolete Less Wrong 2.0 instead.
Here are my thoughts on the new posts.
HPMOR: The Epilogue was surprising yet inevitable. It is hard to say more without spoiling it.
My favorite part of all the new posts is Scott Alexander’s prescient “war to end all wars”. Now would be a great time to apply his insights to betting markets if they weren’t all doomed. The “sticks and stones” approach to mutually assured destruction was a stroke of genius.
I reluctantly acknowledge that introductory curations of established knowledge are a necessity for mortals. Luke Muehlhauser’s explanation is old hat if you have been keeping up with the literature for the last five millennia.
You can judge Galef’s book by a glance at the cover.
I am looking forward to Gwern’s follow-up post on where the multiversal reintegrator came from.
Robin Hanson is correct. LessWrong’s objective has shifted. Our priority these days is destroying the world, which we practice ritualistically every September 26th.
Thank you. I would greatly enjoy more people sharing their takeaways from reading the posts.
I’m deeply confused by the cycle of references. What order were these written in?
In the HPMOR epilogue, Dobby (and, to a lesser extent, Harry) solves most of the world’s problems using the seven-step method Scott Alexander outlines in “Killing Moloch” (ending, of course, with the “war to end all wars”). This strongly suggests that the HPMOR epilogue was written after “Killing Moloch”.
However, “Killing Moloch” extensively quotes Muehlhauser’s “Solution to the Hard Problem of Consciousness”. (Very extensively. Yes, Scott, you solved coordination problems and described in detail how to kill Moloch. But you didn’t have to go on that long about it. Way more than I wanted to know.) In fact, I don’t think the Killing Moloch approach would work at all if not for the immediate dissolution of aphrasia one gains upon reading Muehlhauser’s Solution.
And Muehlhauser uses Julia Galef’s “Infallible Technique for Maintaining a Scout Mindset”, which as far as I know was only distilled down in her Substack post, to do his 23 literature reviews. (It seems like most of the previous failures to solve the Hard Problem boiled down to subtle soldier-mindset creep, which was kept at bay by the Infallible Technique.)
And finally, in the prologue, Julia Galef said she only realized it might be possible to compress her entire book into a short blog post with no content loss whatsoever after seeing how much was hidden in plain sight in HPMOR (because of just how inevitable the entire epilogue is once you see it).
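If you want the puzzle made concrete: a citation graph with a cycle admits no topological order, i.e. no consistent writing order at all. Here is a toy check in Python (the post names and edges are just my reading of the dependencies described above; nothing official):

```python
from graphlib import TopologicalSorter, CycleError

# Each post maps to the posts it depends on, as I read the thread above.
cites = {
    "HPMOR epilogue": {"Killing Moloch"},
    "Killing Moloch": {"Solution to the Hard Problem"},
    "Solution to the Hard Problem": {"Infallible Technique"},
    "Infallible Technique": {"HPMOR epilogue"},
}

try:
    # static_order() yields a valid writing order, if one exists.
    order = list(TopologicalSorter(cites).static_order())
    print("Writing order:", order)
except CycleError as err:
    # CycleError carries the offending cycle as its second argument.
    print("No consistent writing order; cycle:", err.args[1])
```

`graphlib` raises `CycleError` here precisely because each of the four posts (transitively) depends on the next one around the loop.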
So what order could these possibly have been written in?
I think it’s pretty obvious.
Julia, Luke, Scott, and Eliezer know each other very well.
Exactly three months ago, they all happened to consult their mental simulations of each other for advice on their respective problems, at the same time.
Recognizing the recursion that would result if they all simulated each other simulating each other simulating each other… etc., they instead searched over logically consistent universe histories, grading each one by expected utility.
Since each of the four has a slightly different utility function, they of course acausally negotiated a high-utility compromise universe-history.
This compromise history involves seemingly acausal blog post attribution cycles. There’s no (in-universe, causal) reason why those effects are there. It’s just the history that got selected.
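For anyone who wants the selection step spelled out, here is a minimal toy sketch, assuming something like a Nash bargaining product over the four utility functions (the candidate histories, the utility numbers, and the disagreement point are all invented for illustration; the real ones presumably live in the simulations):

```python
from math import prod

# Hypothetical: each candidate universe-history is scored by the four
# authors' utility functions (Julia, Luke, Scott, Eliezer). Numbers invented.
histories = {
    "no blog posts at all":        (0.1, 0.2, 0.1, 0.3),
    "posts with causal citations": (0.7, 0.6, 0.8, 0.5),
    "acausal citation cycle":      (0.9, 0.8, 0.9, 0.9),
}

# Disagreement point: what everyone gets if negotiation fails (nobody posts).
disagreement = (0.1, 0.2, 0.1, 0.3)

def bargaining_score(utils):
    """Nash bargaining product: product of each agent's gain over the
    disagreement point; zero if anyone would be worse off."""
    gains = [u - d for u, d in zip(utils, disagreement)]
    return prod(gains) if all(g > 0 for g in gains) else 0.0

selected = max(histories, key=lambda h: bargaining_score(histories[h]))
print("Selected universe-history:", selected)
```

On these made-up numbers, the acausal citation cycle wins — not because anything in-universe causes it, but because it maximizes the product of everyone’s gains.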
The moral of the story is: by mastering rationality and becoming Not Wrong like we are today, you can simulate your friends to arbitrary precision. This saves you anywhere from $15 to $100 per month on cell phone bills.
(absolutely great use of that link)
(and brilliant point about cell phone bills)
I only read the HPMOR epilogue because—let’s be honest—HPMOR is what LessWrong is really for.
(HPMOR spoilers ahead)
Honestly, although I liked the scene with Harry and Dumbledore, I would have preferred Headmaster Dobby not be present.
I now feel bad for thinking Ron was dumb for liking Quidditch so much. But with hindsight, you can see his benevolent influence guiding events in literally every single scene. Literally. It was as if a lorry had missed you and your friends and your entire planet by centimetres, simply because someone threw a Snitch at someone in just the right way to get them to throw a pebble at the lorry in just the right way so that it barely missed you.
I liked the part where he had access to an even more ancient hall of meta-prophecies in the Department of Mysteries of the Department of Mysteries.
As far as I can tell, all prophecies are now complete. I wonder if the baby will be named Lucius.
Oh, another thing: I think it was pretty silly that Eliezer had Harry & co. infer the existence of the AI alignment problem and then had Harry solve the inner alignment problem.
That plot point needlessly delayed the epilogue while we waited for Eliezer to solve inner alignment for the story’s sake.
It was pretty mean of Eliezer to spoil that problem’s solution. Some of us were having fun thinking about it on our own, thanks.
Just a public warning that the version of Scott’s article that was leaked at SneerClub was modified to actually maximize human suffering. But I guess no one is surprised. Read the original version.
HPMOR: obvious in hindsight; the rational Harry Potter has never [EDIT: removed spoiler, sorry]. Yet most readers somehow missed the clues, myself included.
I laughed a lot at Gwern’s “this rationality technique does not exist” examples.
On the negative side, most of the comment sections have been derailed into discussions of Bitcoin prices. Sigh. Seriously, could we please focus on the big picture for a moment? This is practically LessWrong 3.0, and you guys are nitpicking as usual.