It’s a bit long, sorry.
Let me bore y’all with a bit of my MoR puzzle-solving.
Near the end of 2010 I came up with a theory (which I codenamed NBT), some far-out thing that explained a few observations, and removed a good deal of confusion. As an aside it also led me to predict the existence of a certain clue—more on that later. Also, I think it may be endgame-relevant.
Since this theory was kinda subtle, I felt I needed more data on it, so I filed it away and waited. Somewhere along the way I realized that it explained even more things than I thought, thus raising my confidence in it (a retrograde prediction? My surprise about it is evidence to me, even if it wouldn’t convince anyone else), though of course it could still be subject to positive bias, as I was well aware.
Enter the final batch of the SA arc, late August and early September (2011). It contained an unlikely event, one I would have bet against seeing without NBT. With it, it was just the kind of thing I would expect. So I marked NBT confirmed. Shortly afterwards a random re-reading of an older part led me to discover the clue I had predicted earlier: an innocent little sentence I had completely missed before, one that contains a hint connected to the nature of magic. As it happens, this clue also explains the mechanism by which the Interdict of Merlin was made.
Thus, feeling my theory NBT strongly confirmed, I confronted the one part I was still confused about, and within ten minutes I moved from ‘a bit tricky but possible’ to ‘simple, elegant, neat’. I didn’t need any knowledge that wasn’t available when I came up with NBT, so I could have had that answer a lot earlier; I’ll call the crucial bit of understanding the V-factor.
(Also, Self Actualization confirmed one more prediction of mine. I suppose for others that particular bit of noticing-your-confusion could be a hint for a difference-from-canon mystery that didn’t lead anywhere; for me, it was evidence I was right all along about one more thing. This theory was required for later, but I’m sure I wouldn’t have needed the confirmation; I was pretty sure of this one. Just mentioning that to show why I’m glad SA happened.)
All this reminded me of the moral of That Alien Message, specifically how limiting it is to think ‘we need more information’ and do nothing, instead of actually sitting down and confronting the mystery. On one hand I saw things most would say can’t possibly be seen; on the other hand, I failed to find things out fast enough because I didn’t think I could possibly have seen them.
Because I’m not perfect (yet), I didn’t update on everything immediately; to be more precise, it wasn’t until January of 2012 that I applied the V-factor to my previous theories of the nature of MoR magic, but when I did, I became unconfused. The First Cause of Verresverse Magic became clear. Looking back and seeing that I could have had the whole thing a year earlier if I had tried (not hindsight: the conclusions that led to it are clearly traceable) really drove the point home.
My point with all this? Methods of Rationality is full of subtle puzzles, reasons beneath the obvious, as others have pointed out in far fewer words, and also plenty of ways to practice and test your rationality skills. Noticing confusion, generating alternative hypotheses, and countering positive bias, above all. And yes, these are well-constructed puzzles, not like messy reality, but the point is to wear the grooves into the mind.
Nitpicking the story, complaining about things we wouldn’t even notice from other authors, holding Eliezer to higher standards because ‘Hey, we’re rational(ists)!’, while failing to notice that these may well be deliberate hints of something deeper … it’s … well … kinda stupid, ain’t it? Like how sophisticated fan-fiction readers might pattern-match the Harry of the early chapters as a Mary Sue while failing to notice all the clues about his coldness, and conclude it is the sign of an incompetent author when it is in fact a hint that something more is going on. At best, a useful lesson in how noticing confusion and generating an alternative hypothesis might be needed.
Also, if you have been reading LW for a while and have been passionate about improving your rationality, yet you failed to notice so far how good an opportunity MoR is … I just spoiled it for you; it was a test, and you failed it. Sorry.
tl;dr = I figured out why Dumbledore set fire to a chicken, but I’m not going to tell you.
Don’t be uncharitable. He’s providing clues (initials and such) that are designed to show he has a specific theory figured out, so we can see afterwards if he was right or wrong, but without spoiling the mystery for the rest of us. (And despite the occasional claim that spoiling a mystery doesn’t “spoil” anyone’s pleasure, some of us strongly prefer not to be spoiled nonetheless.)
His post therefore had positive utility for me, as he intrigued me in a fun way, so I upvoted his comment.
On my next reread (a few days hence, I guess) I’ll try to also figure out what he figured out. If I do figure it out, I’ll probably post a few password-ciphered sentences to show I’ve also figured it out, and provide the key only after it’s confirmed or not.
His post therefore had positive utility for me, as he intrigued me in a fun way, so I upvoted his comment
This has been oscillating between −1 and −3 for a while now. That is, while some people seem to find it intriguing, others (and they are in the majority, by a small margin) find it nothing but annoying. (At least, that was my reason for downvoting… I’m generalising from one example here.)
And I find it annoying because it’s a long-winded chronology of his epiphanies. It’s like a chronology of a war where you don’t know who’s fighting whom over what, or even in which century the war is taking place: totally pointless.
All the positive utility in this post could be generated by saying “Hey, I thought of this awesome theory (codename: NBT) that explains everything in MoR. Could just be positive bias, but I doubt it. Explained an observation in SA that I considered really unlikely. But it’s not just SA, it explains everything in every other arc too!!” That’s it, the end.
But if that’s what he wanted to say, he could have said it in the Discussion Thread. No, that was just something he was trying to get out of his system. The point he was trying to make was “HPMoR is an awesome fic and you guys have no business criticising its surface deficiencies.”
Which he could have made without the long digression into NBT.
And then there’s the whole implicit “How dare you enjoy this fic in a way other than my way of enjoying it?!” business.
(major likes solving puzzles? Good for him. Some of us like thinking about the worldbuilding / witnessing the awesome / thinking about the character dynamics. It’s not sacrilege to say “meh” to the various hints planted in the story if you’re more interested in looking for the tentacle rape.)
(And despite the occasional claim that spoiling a mystery doesn’t “spoil” anyone’s pleasure, some of us strongly prefer not to be spoiled nonetheless.)
Fair enough, but there’s this awesome tech called rot13 which allows you to have your cake and eat it too. I hope you realise that there do exist some people who’d rather know the broad strokes of the plot in advance.
But if you’d rather not share, that’s OK too. Just don’t post a 400-word essay about finding it out. Not when a 50-word paragraph would do the trick.
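For anyone who hasn’t used it, rot13 simply rotates each letter 13 places, so the same operation both encodes and decodes; here is a minimal Python sketch, with a made-up placeholder standing in for the spoiler text:

```python
import codecs

# rot13 shifts each letter 13 places along the alphabet; applying it twice
# returns the original text, so encoding and decoding are the same operation.
spoiler = "NBT is actually about ..."        # hypothetical placeholder text
encoded = codecs.encode(spoiler, "rot13")    # -> "AOG vf npghnyyl nobhg ..."
decoded = codecs.decode(encoded, "rot13")    # round-trips back to the original
assert decoded == spoiler
```

Readers who want the spoiler decode it; everyone else only ever sees the scrambled version.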
No, that was just something he was trying to get out of his system.
You got that right. After this thing becomes relevant in the story, there may be complaints, and while I’m sure Eliezer will explain it competently, people are prone to throwing accusations of asspull around. So if there is enough detail in a time-stamped, unedited comment, it can be pointed out later. ‘It’s not an asspull; look, this idiot even guessed it. You can read about the stuff he used to do it in the Sequences.’
Now that I no longer feel like I’m doing LW a disservice by sitting on such validation, I can go back to speculating quietly.
As for the rest, I guess you’re right, though I note that’s not why you started this thread. Also of note is the fact that this is LessWrong, not some random ff site. Getting things right might be a higher priority here than elsewhere.
Um… The initials are not clues; NBT is just how I referred to the theory (Next Big Thing). Sorry about any confusion this might have caused.
Um… You kept talking about how you kept finding confirmation, but… did you intend to skip actually telling us what NBT is?
(i.e., you said you spoiled it, but I notice no actual description of NBT or which specific bits of evidence you used.)
you said you spoiled it, but I notice no actual description of NBT
What he actually said was
if you have been reading LW for a while and have been passionate about improving your rationality, yet you failed to notice so far how good an opportunity MoR is … I just spoiled it for you
Ergo, spotting that MoR is awesome was a test, and we flunked it. Because apparently we failed to notice that MoR is awesome.
… or something like that.
What he’s claiming isn’t that the rest of us failed to notice that MoR is awesome, but that the rest of us failed to notice that MoR is awesome as a series of puzzle-solving rationality exercises.
did you intend to skip actually telling us what NBT is?
Of course. The hard part about noticing your confusion isn’t recognizing it when it’s pointed out. It’s, you know, the noticing part. If I tell you, I get a few points of karma for it, maybe, and everyone loses the opportunity to do it for themselves. Now, that’s negative sum!
I think one thing that keeps people from asking questions is the flinching from uncertainty that may never get resolved. But that’s clearly not the case with MoR (unless Eliezer is evil and his puzzles will never be resolved in-story). These are well-structured puzzles leading us along (hell, I’m identifying puzzle arcs), and we just have to make some effort.
I guess my sense of justice doesn’t like it when something deep gets complaints about surface stuff, when that surface can in fact be justified by the deeper stuff. Stupid sense of justice.
No, it just looks annoying. If you really wanted to prod thought, you’d offer the proverbial ‘hostage to fortune’ in the form of a hash precommitment to your NBT theory, so that you’d have verifiably expressed a particular theory well in advance of MoR’s ending and exposed yourself to public ridicule in the event your theory is laughably wrong or you never had one at all; knowing that you now have something at stake, people might take you more seriously.
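For readers who haven’t seen the trick, a hash precommitment just means publishing a digest of the written-up theory now and the plaintext only after the story resolves; a minimal Python sketch, with a placeholder string standing in for the actual theory:

```python
import hashlib

# Commit now: write the theory down and publish only its SHA-256 digest.
theory = "NBT: <full write-up of the theory would go here>"  # hypothetical placeholder
commitment = hashlib.sha256(theory.encode("utf-8")).hexdigest()
print("commitment:", commitment)  # post this publicly, long before the ending

# Reveal later: publish the original text; anyone can re-hash it and check
# that it matches the digest that was posted in advance.
assert hashlib.sha256(theory.encode("utf-8")).hexdigest() == commitment
```

In practice one would append a long random nonce to the text before hashing (and reveal the nonce along with the text), so the commitment can’t be brute-forced from a short list of guessable theories.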
Nobody’s infallible. What kind of rationalist approaches a work with a couple holes in it and leaves under the assumption that the writer is simply so perfect that everything will be addressed and wrapped up neatly by the end?
Isn’t it far more likely that in writing a fantastic teaching tool/test, Eliezer occasionally leaves a thread or two hanging?