Not all the same blind spots—I exaggerated and I shouldn’t have.
But in particular, I would expect Bella to take quite a bit more time to think about the question: will awakening the Quileute really help them? She rightly realizes that the Quileute are in a dangerous and precarious position, but she immediately panics and pulls the first lever she can find, instead of thinking for a week or so (or at least a solid hour that the reader sees) about whether that will really make things better.
She should have a pretty significant prior expectation that there will be major consequences to Awakening, given what happens in the closest analogue she’s experienced, Turning. A newborn vampire is darn hard to restrain, so why should she assume she has a good chance of experimenting on Rachel without being detected?
She doesn’t really strike me as impatient in this way in the first half of the story. I guess it’s plausible that the high stakes spook her into uncharacteristic haste, but really, the fact that Aro could “remember” the Quileute at any time doesn’t justify a “something must be done / this is something / therefore this must be done” kind of response.
Of course, she makes this mistake in her own distinctive personal style. For example, she does at least bother talking with one of the people who would be affected before going out there. But two of Harry’s most distinctive flaws are Extreme Other-Optimizing and Experimenting Before Thinking, and Bella exhibits both of them simultaneously in awakening the Quileute.
Actually, it looks to me like that mistake happened because turning shook her out of good habits. She stopped writing journal entries, supposedly because she has perfect memory; but the main benefit of that was consolidating and analyzing thoughts, not preserving them. On top of that, she didn’t consult with anyone, because of the mind-reading issue. She thought vampiric super-memory was a substitute for her old cognitive toolkit, but it wasn’t, so she ended up doing something very stupid.
This kind of insight is why, from a rationality perspective, I love the twist in this story. It is so good at showing the causal density of real human systems and the disasters that can come from falsely concluding that you have a causally correct theory about why you won when you win and why you failed when you fail.
She stopped writing journal entries, supposedly because she has perfect memory; but the main benefit of that was consolidating and analyzing thoughts, not preserving them.
How could she have been sure of this? Where would she have needed to direct her rational faculties to pull this hypothesis up out of all the other hypotheses about what went wrong?
It seems plausible that what Bella needed might have been some specific insight applied at or before a specific chapter, but the menu of things it might have helped to adjust is enormous, and any particular fix might have had negative side effects of its own that we aren’t seeing in the story because they weren’t applied.
For example, one of my own personal heuristics is that I should generally delay any action that has “epistemically irreversible” consequences until either (1) I am forced into the action by external circumstances and the need to “make a bet for survival one way or the other”, or (2) I have identified post-change mechanisms that will allow the new situation/framework to identify its own flaws and dismantle itself if it isn’t actually for the best.
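Here is a minimal sketch of that decision rule as toy Python, in case it helps; the Change fields and the example numbers are invented purely for illustration, not anything from the story or the comments above:

```python
from dataclasses import dataclass

@dataclass
class Change:
    expected_value: float          # naive cost/benefit estimate of the change
    irreversible: bool             # are the epistemic consequences irreversible?
    forced: bool                   # condition (1): forced by external circumstances
    self_critiquing_bailout: bool  # condition (2): the post-change state can
                                   # detect its own flaws and dismantle itself

def should_adopt(c: Change) -> bool:
    """Toy version of the delay-irreversible-changes heuristic."""
    if not c.irreversible:
        return c.expected_value > 0   # ordinary cost/benefit is enough
    if c.forced:
        return True                   # survival bet: act one way or the other
    if c.self_critiquing_bailout:
        return c.expected_value > 0   # a bailout option exists, so it's safer to try
    return False                      # otherwise delay and keep thinking

# Turning, as I read it: big apparent upside, irreversible, not forced by a
# true emergency, and no verified bailout option -> the heuristic says wait.
print(should_adopt(Change(expected_value=10.0, irreversible=True,
                          forced=False, self_critiquing_bailout=False)))  # False
```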
Based on this pet theory and post hoc rationalization about Bella, I might argue that the place where Bella went wrong was in becoming a vampire and accepting apparently permanent modifications to her mind despite not being forced into it by a true emergency or verifying that the post-modification state passes the “self critiquing reversibility” test.
As Vaniver pointed out in the previous comment thread, she now appears to be trapped in a Punisher comic book that’s almost certain to have an unhappy ending, rather than living in a romance novel. Instead of living for pointless revenge, she could still have been flirting with a dangerously hot boy who would magically be a good husband once the relationship was magically made permanent.
Of course, in a rationalist universe where magical thinking runs into implacable reality even the romance novel may have been a bad outcome for luminous!Bella. Romance novels have to stop when they stop, because otherwise the end of the story arc would be about a woman married to a mobster or a sociopathic nobleman or a pirate or (ahem) a vampire, and that is totally not what traditional romance novels are about.
Based on this pet theory and post hoc rationalization about Bella, I might argue that the place where Bella went wrong was in becoming a vampire and accepting apparently permanent modifications to her mind despite not being forced into it by a true emergency or verifying that the post-modification state passes the “self critiquing reversibility” test.
Possibly, but keep in mind she has evidence that this irreversible transition would make her better at improving. Not wanting to become superior because that might make you overconfident is a pretty self-defeating strategy, though constantly checking plans for signs of overconfidence is a good idea. (That is, if she had thought about it beforehand and been more self-aware, she would have understood that journaling is valuable as more than a memory aid, and would have kept it up or found a substitute as a vampire. But she’d be able to journal / self-critique far better as a vampire than as a human.)
Not wanting to become superior because that might make you overconfident...
...is not what I’m talking about.
The self-critiquing-reversibility test is designed specifically to prevent apparent self improvements which are not actual self improvements and from which you cannot retreat. If the test is passed then it should give you more room to play and explore because you actually have a safety net in the form of a “bailout option”.
The test is designed to prevent you from, for example, getting addicted to a purported nootropic that turns out to be more like crystal meth than like caffeine. Avoiding “belief in the value of irrational belief” is another place where the heuristic might be applied.
For Bella, the things vampires can’t do include turning off their desire for blood or changing their emotional connection to their mates. These are, in some sense, “permanent utility function tweaks” rather than simple “optimization power upgrades”.
If Harry had applied the test in the first handful of chapters of MoR, he would have asked McGonagall if it was possible for him to explore the wizarding world but then back out somehow if, after educating himself about the costs and benefits of both states, he decided it was better to be a Muggle instead of a wizard. The best answer from McGonagall (though I don’t think she can actually do this, which may be relevant) would be “Here, let me take Veritaserum… Now… Yes, easily, because memories can be erased with an Obliviation spell and returning to a naive state will be basically the same as never having learned about the wizarding world in the first place, but you’ll find that the cost-benefit analysis is unambiguously positive because of things like X and Y which appeal to you right now. The biggest downsides are P and Q and similar issues which are obviously negligible in the face of X and Y.”
keep in mind she has evidence that this irreversible transition would make her better at improving
Absolutely. Resilience and naive optimization are often in conflict.
The highest expected value strategy in investing is to put all your money in the single investment that itself has the highest expected value (assuming the opportunity is large enough that your whole contribution doesn’t push it so far down its marginal-utility curve that the last dollar invested would earn a lower return than some other investment). Nonetheless, an index fund can be a better strategy based on variance estimates and more or less sophisticated risk-of-ruin calculations, combined with the value of “avoiding ruin”. Nearly all billionaires are massively “over-invested” in their own companies, and they frequently stop being billionaires for this very reason. The Fortune 500 has substantial turnover decade over decade in part because a company has to sacrifice some resilience to get onto that list, and in the long run (since corporations are potentially immortal) a lack of resilience catches up with them.
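To make the variance / risk-of-ruin contrast concrete, here is a toy Monte Carlo sketch; the payoff numbers are invented for illustration, not estimated from any real market. Both strategies have the same expected value per round, but very different ruin profiles once returns are compounded:

```python
import random

ROUNDS = 10         # rounds of reinvestment
TRIALS = 100_000    # Monte Carlo trials
RUIN = 0.01         # "ruined" if wealth ends below 1% of the starting stake

def asset_return():
    """One risky asset: 50% chance of 2.4x, 50% chance of total loss (EV = 1.2x)."""
    return 2.4 if random.random() < 0.5 else 0.0

def final_wealth(n_assets):
    """Start with 1.0; each round, split wealth evenly across n_assets fresh bets."""
    wealth = 1.0
    for _ in range(ROUNDS):
        wealth = sum(wealth / n_assets * asset_return() for _ in range(n_assets))
    return wealth

print(f"theoretical expected final wealth, either way: {1.2 ** ROUNDS:.1f}")
for n_assets in (1, 20):   # all-in vs. a crude 20-asset "index"
    outcomes = [final_wealth(n_assets) for _ in range(TRIALS)]
    ruined = sum(w < RUIN for w in outcomes) / TRIALS
    mean = sum(outcomes) / TRIALS
    print(f"{n_assets:2d} asset(s): simulated mean {mean:6.1f}, ruin probability {ruined:.3f}")
# Both sample means sit in the same ballpark (the all-in mean is very noisy, since
# it comes from a handful of huge wins), but the all-in strategy is ruined roughly
# 99.9% of the time while the diversified one is essentially never ruined.
```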
This is what I was trying to get at with the link about causal density. You can infer from first principles that applying the epistemic-reversibility test too diligently will hurt you if you are in a “get big fast” regime where the only survivors are lucky risk-takers. Or maybe it can hurt you for some other reason I don’t know about yet, one that will make more sense to me if I apply it some day and then get hurt in a novel way...
And, honestly speaking, for any given heuristic I consciously apply, I expect to gain some benefit while also expecting to get hurt sometimes. If I keep doing novel stuff with an eye towards rational self-improvement, it seems inevitable that I’ll get hurt in a way I wasn’t expecting; however, it seems reasonable to suppose the damage will be limited because I’m on the lookout for it. In working in this area at all, I’m either implicitly or explicitly guesstimating that there is an upside to “rationality in general” that beats the downside.
Rationally speaking, it would make sense to make the risks of active rationality cultivation explicit, subject the calculation to conscious analysis, and then abandon active rationality cultivation if the expected value is honestly negative. It is precisely the fact that rationality basically demands this kind of bailout analysis at some point that has generally helped me feel safe(ish) when experimenting with this particular package of memes.
The highest expected value strategy in investing is to put all your money in the single investment that itself has the highest expected value (assuming the opportunity is large enough that your whole contribution doesn’t push it so far down its marginal-utility curve that the last dollar invested would earn a lower return than some other investment).
I wanted to comment on this example: the benefits of index funds go beyond variance reduction. Trading costs make them a superior long-term strategy to managed funds or researching your own stock picks (the highest-expected-value investment will change from moment to moment), and the fact that stock prices are not independent means a well-chosen larger subset of stocks will have higher expected value than a poorly-chosen smaller subset of stocks.
To reword that last sentence (and explain what I mean by well-chosen and poorly chosen): if I sort stocks by expected return over the next month and pick the top five, my expected value is no better than if I also model the effect stock prices have on one another and then pick the set of five stocks whose joint expected return over the next month is highest, even though I have access to the same set of stocks.
That is to say, the apparent superiority of the single highest-EV stock is an artifact of how rudimentary that EV estimate is. You can improve the expected value without discarding that method of analysis, and without touching on utility concerns (which is where risk of ruin comes into play in a big way).
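And to put a rough illustrative number on the trading-cost point a couple of paragraphs up (the return and cost figures here are invented for the sake of the sketch, not real fund data):

```python
# Illustrative only: how an annual cost gap compounds over a long horizon.
gross_return = 0.07                                      # assumed pre-cost annual return
annual_costs = {"low-cost index fund": 0.001,            # assumed 0.1%/yr drag
                "active trading / managed fund": 0.015}  # assumed 1.5%/yr drag
years = 30

for name, cost in annual_costs.items():
    final = (1 + gross_return - cost) ** years
    print(f"{name}: $1 grows to ${final:.2f} over {years} years")
# The ~1.4%/yr cost gap compounds to roughly a one-third difference in final wealth.
```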
I agree with you that irreversibility should raise giant red flags and suggest that an EU (or however you want to abbreviate expected utility) calculation is a better choice than an EV calculation, and plans which are reversible significantly decrease the risk of ruin. But I think Bella’s overall risk of ruin decreased with the transition to a vampire (and then massively increased with the transition to a revolutionary), and she had good reason to expect that would be the case.
That is interesting to think about, though: the optimal way to manage a transition like that... hmm.
On top of that, she didn’t consult with anyone, because of the mind-reading issue.
I think this is the thing that bothers me most about Bella’s plan. It’s plot-induced stupidity (though I don’t blame Alicorn for it, since Meyer came up with Aro’s power). If Bella had vocalized or written down her plan, even once, I find it hard to believe she wouldn’t have subconsciously examined its assumptions and been struck by its idiocy. Maybe that benefit is particular to me (I find that the moment I try to explain my thoughts to someone, the holes become readily visible in a way they wouldn’t be if I just examined them myself), but I imagine many people experience that.
If Edward weren’t lovestruck (and/or optimistic), he might have warned Bella “look, I trust you, but just in case you’re planning to make Aro, the guy we’ve been talking about, unhappy in any way, he has the experience and the history and the malice necessary to ruin all of our lives. Don’t mess with Aro.”
If Alice weren’t blocked by the La Push shapeshifters, then she might have noticed horrible events on the horizon when Bella decides to activate the werewolves. “Bella, is there a reason I suddenly can’t envision myself?” Not sure about the future-blocking and the temporal range of Alice’s visions, but it seems likely she would note blankness spreading from Bella to everyone and say “hey, Bella, what’s going on?”
It seems likely, from the way it works in canon at least, that Alice would be able to figure out that something horrible is going to happen: she sees Bella jump off a cliff (and doesn’t see Jacob catching Bella). Similarly, she might see Bella burning in a pit (but not Leah hunting Bella), though the similarities there are somewhat strained.
(Also, sudden thought: what if the “this one” they killed was Alice? Seems tremendously unlikely, since Edward is the only one the follow-up statement makes sense for, but it isn’t contradicted by any evidence so far.)
Useful concept. On analysis, I find that a lot of my own stupidity occurs because, if I had thought it out a bit more, my more rational choices would spoil the dramatic narrative that I construct by behaving more intuitively and less rationally.
Edward is the only one the follow-up statement makes sense for
It also makes sense for Irina. She would be dead set on eradicating the very useful new servants of the Volturi, a sufficient reason to kill her. I haven’t read the Twilight books but I think that in canon they kill her for wasting their time.
I’m also fairly sure Irina was the one burned above the pit. Alicorn provided many hints that Edward may still be alive: he is never mentioned after Bella is shredded by the wolves; Bella can find none of his jewellery, in the ashes or anywhere else; and Bella’s widowed depression is, by comparison, nowhere near as extreme as Jasper’s, Marcus’s, or even Irina’s.
Bella’s widowed depression is, by comparison, nowhere near as extreme as Jasper’s, Marcus’s, or even Irina’s.
This is not to be taken as evidence for Edward’s survival. Vampires do not have mate-sensing ESP above and beyond their normal ability to detect the world around them.
I was under the impression it was for the more serious offense of bearing false witness, but really, those are just different ways to spin the same thing.
Upvoted for the phrase “plot-induced stupidity”.
Plot Induced Stupidity is a trope.
Dooohhh!
Does that mean I should take back my upvote? But it seemed so right to make a big deal of how cool the phrase was.
Bella can find none of his jewellery, in the ashes or anywhere else

He only had the one ring.
Ooh, good point. Objection withdrawn.