My biggest issue with the fic so far is that the bizarre way Alice’s precognition interacts with people’s decisions (I have no idea whether it works like that in the original novels, since I haven’t read a word of them) doesn’t seem to puzzle Bella at all.
If people are just a physical system like any other, Alice’s powers should be able to predict them just like any other physical system. Since they are strangely unpredictable for her, exploring what other physical systems are similarly unpredictable should provide information about how people work. For instance, if Alice can’t predict anything influenced by quantum events, that would be a big clue that quantum events are involved in cognition. If she can’t predict anything sufficiently chaotic (in the chaos-theory sense), that would give away how chaotic people’s decision making is.
If nothing inanimate poses similar difficulties, that points towards either a psychological limitation she might be able to break out of, or the existence of some sort of magical free will in that universe. Do any animals have it? In addition to humans, vampires obviously also do. And pointing this out to Edward (who apparently believes in the existence of souls, but also that he doesn’t have one) might be useful.
I’m trying to be canon-compliant in terms of worldbuilding and all characters except Bella. So, yeah, Alice’s power works pretty much the same way in the originals, although I’ve found excuses to bring it up more frequently.
My (largely surface level, not physics-inspired) explanation for her limitations is that people’s decisions are the only things liable to be affected by her advance knowledge. The weather doesn’t change based on what she or anyone else does, and so she is a perfectly reliable meteorologist except at time scales so far in advance that butterfly effect type stuff starts adding up. However, when she sees things, even if she tells no one, her own decisions change, and so does what Edward can see in her head, etc.
Alice’s visions are: a) purely visual, b) do not tend to have vantage points from inside of objects, c) are only mostly under her control, and d) can be changed even after she’s become very confident in them. She cannot see inside people’s future minds. She can’t perfectly decide what to see. The sorts of things that upset her visions after she’s had them are reactions to that vision itself in her or others (like when she sees Bella telling her something that there is subsequently no reason for Bella to share), decisions that haven’t been made yet being prompted by other factors (like when James’s coven heard the baseball game), and people deliberately exploiting the nature of her visions (the way James did).
In canon, it turns out that she can’t see Quileute wolves or half-vampires, nor things they affect, at all—the initial supposition is that the Quileutes are just too unpredictable because their shapeshifting is so unexpected, but that’s patently ridiculous. They often do predictable things, and many have good control over their “phasing”. In the last book she blames the gap on “half breeds” in general and says she can see humans because she was one and vampires because she is one. In neither case does canon indicate that she actually knows what’s going on instead of just guessing, so I’m going to freely invent here.
As a character, Bella cares somewhat more about what it is that various powers can accomplish than the mechanism that makes them work. It is obvious to her from the moment she learns about Alice’s abilities that Alice has the ability to save her from various miserable fates like getting hit by a car. (In canon, Edward had to risk exposure to knock her out of the way, because he’d kept telling Alice to stay out of the situation and she was getting only involuntary Bella-visions, not paying attention to her deliberately for informational purposes.) Bella is unlikely to devote considerable time to devising Alicey experiments until the very serious gaps in precog as used with wolves and half-vamps come up. At that time, learning how the power works can confer a significant practical improvement if there’s a way to work around it. Before, Alice is unlikely to become significantly more valuable via information about the nuts and bolts of her visions.
Well, obviously her visions only show her the future as it would have been had she not received that vision, or something like that. But since she can see counterfactual futures, the fact that her reaction would undo a particular future can’t explain why she doesn’t see that future in the first place.
You could further stipulate that she can only see futures that are insensitive to the way her visions counterfactually ended up not containing them, i.e. futures that turn out the same whether she counterfactually changed her mind about trying to see that future or counterfactually tried but failed, or something similar (e.g. that she counterfactually received some other vision, based on yet another, but only when the visions cycle between a limited number of possibilities, which leads to similar results with somewhat different details).
This would not explain why her visions fail even if the undecided subject is sufficiently far away not to be influenced by nuances of her counterfactual reactions, i.e. when trying to see what someone is doing between now and time X, where X is the earliest possible point at which she could affect anything. Maybe she can’t see anything she couldn’t possibly see with her own eyes no matter what she does, so she can’t receive those sorts of visions in the first place. But finding out would still be useful (e.g. so she could arrange to have really fast transportation on hand to increase her vision range). And if what’s blocking her visions of “undecided” things is of the sort speculated about above, she could circumvent it with suitable precommitments and by staying outside the subject’s actual sensory range (while the subject is still within her potential sensory range).
As for your explanations of why your Bella does not care to find out: it’s your character, but it does seem somewhat at odds with the stated irresistibility of mysteries for her, or with the way she insists on testing the various vampire abilities. Also, finding out might have allowed for a more convenient way to block her from reading her notebook entries, and possibly allowed her to find James.
You know, I never thought I’d stay up late (2AM!) reading Twilight fanfiction. I’ve done it for Harry Potter, but Twilight?
In any case, possible manipulation of Alice’s precognition makes me think of Kavka’s toxin puzzle.
In particular, if Bella intends to do something knowing it will trigger Alice to see it in the future, but then the payoff comes through via Alice seeing it… it’s not quite the same setup.
I don’t think this actually occurs in the story up to where I currently am (I recall a scene where Bella mentions she’ll do or say something to Alice, Alice sees it in the future and explains it, Bella questions it, and Alice says that what she saw was what would have happened if she hadn’t seen the vision), but I do wonder if this could impose limitations in some form: “I intend to do X, which I really, really don’t want to do, but will do it because it will trigger Alice’s vision, which will then make it so that I don’t have to do it.”
If you imagine that the future is modeled by differential equations, and that Alice can discern the coefficients of these equations to within a certain value, there will be certain equations which are obviously determined (f ~= f') and others which aren’t (f ~= 0.0001*f').
In the same way that people feel that tables are solid, because the reaction to their hand is fairly determined, and pillows are soft, because the reaction depends on how you’re moving your hand, it makes sense (to me) that predicting the future could easily have fuzzy areas.
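To put a toy number on that fuzziness (assuming the simplest first-order model, f(t) = f0*exp(k*t), and a fixed fractional error in Alice’s read of the coefficient k, both of which are my own invented illustration, not anything from the fic):

```python
import math

def forecast_spread(k, rel_err, t):
    """Ratio of the largest to the smallest forecast of f(t) = f0*exp(k*t)
    when the growth coefficient is only known to within k*(1 +/- rel_err):
    exp(k*(1+rel_err)*t) / exp(k*(1-rel_err)*t) = exp(2*k*rel_err*t)."""
    return math.exp(2 * k * rel_err * t)

# f' = f (k = 1): a 1% coefficient error leaves the forecast almost exact.
print(forecast_spread(1.0, 0.01, 1.0))      # ~1.02

# f = 0.0001*f' (k = 10000): the same 1% error makes f(1) totally fuzzy.
print(forecast_spread(10000.0, 0.01, 1.0))  # ~e^200, astronomically large
```

The same fractional uncertainty in the coefficient is harmless in the first equation and catastrophic in the second, which is the table-versus-pillow distinction in miniature.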
If this isn’t how it works, then Alice should be able to solve NP-complete problems in polynomial time, as in rationalist!Harry’s experience with the Time-Turner. Perhaps this will be explored?
Wow, that sounds like including it might involve math. Sorry.
Kind of… but mostly I just mean that if you have a problem whose answer is easy to check once you have it but hard to find in the first place (this is roughly what NP-complete means), then Alice can look at the future in which you found the answer and told her what it is, tell you the answer now, and you only have to check it. (Being able to find the answer as quickly as you can check it is what would make P = NP, IIRC. There’s currently a claimed proof that this is impossible undergoing peer review.)
If there is some kind of fuzziness like I described, then she might just see a piece of paper with an equation or something. If she can read text, then this is a serious problem for luminosity!theoretical computer science.
It would take almost as long to check things like this with Alice as it would to manually check things alone. You’d have to make up your mind to check each possible answer individually, and then Alice could see what would happen if you went on to actually check it. It might save a little time relative to actually going through elaborate checking rigmarole, although adding a second person to the task would make it more person-hours on net.
But you can’t just make up your mind to check every possible answer, without picking out a specific one and going, “I’m planning to check this one and then stop”. That isn’t a plausible thing for you to decide to do if you actually know how hard it’s supposed to be, because there’s no way you’ll actually do it. So Alice can’t see you finding the answer.
Exploiting causal loops to solve NP problems does not involve checking all candidates in sequence and then transporting the answer back. Rather, it involves checking only one candidate, but deciding which candidate to check in such a way that the situation is self-consistent if and only if that one candidate is the correct answer. In context, this depends on being able to foresee the outcome of a simple firmly decided conditional strategy, where the events you plan to condition on are the contents of the vision itself.
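The self-consistency condition described above can be sketched as a search for fixed points of the firmly decided plan: a vision is admissible only if acting on the plan reproduces what the vision showed. (A toy sketch; the `secret`/`check` setup is invented for illustration, and the real trick would of course not enumerate candidates itself.)

```python
def strategy(vision, check):
    """The firmly decided conditional plan: check only the candidate the
    vision names; announce it if it passes, otherwise announce failure."""
    return vision if check(vision) else "no solution found"

def self_consistent_visions(candidates, check):
    """Keep only the visions that acting on them would reproduce."""
    return [c for c in candidates if strategy(c, check) == c]

# Toy search problem: 100 candidates, checking any one is a single comparison.
secret = 42
consistent = self_consistent_visions(range(100), lambda c: c == secret)
print(consistent)  # only the correct answer is a fixed point
```

Every wrong candidate is self-undermining (the vision says c, the plan announces failure), so whatever enforces consistency is left with exactly one option: show the true answer.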
So if the visions are generated by a computationally unbounded process that extrapolates from inexact snapshots of the present (which include plans and dispositions but not some of the other contents of minds), then the NP trick could work: The dependency of the future on Alice’s reaction to the vision is well-defined and available to the extrapolation process. Or it could just give her a headache; that’s self-consistent too.
If the vision generator refuses to hypothesize any visions within the extrapolation process, or if it doesn’t care whether extrapolated-Alice gets false visions, or if it’s computationally bounded and only iterates towards a fixed point at a limited rate, then the trick would fail.
And if it’s not extrapolation-based, then I dunno, but I can’t think of any interpretations that would be incompatible with a headache.
But Alice’s power doesn’t work like that. It predicts the future conditional on Alice not having seen the prediction.
But the point is that you can solve these problems; it just takes significantly longer to solve them than to check the answer once you have it.
So you decide to run a brute-force calculation on a computer for three days, write down the answer, and tell her (a la kavitra’s comment); then she sees the answer on the paper in the future, and you change your mind and just check the answer.
Because Alice sees the future as it would have been without the prediction, there’s no need for a stable causal loop like the one pengvado discusses.
So it’s mostly useful for problems of moderate solvability: ones you could solve without the power given considerable effort (but not more effort than you would actually be willing to put forth).
For example, you could set Hashcash to mint some expensive stamp, and firmly decide to check its output in three days.
(For those who don’t want to take the time to read the link, Hashcash is a proof-of-work system based on brute-force partial preimage attacks against hash (one-way) algorithms.)
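A minimal hashcash-style sketch of that scheme (not the real Hashcash stamp format; the challenge string and difficulty are made up):

```python
import hashlib

def digest(challenge, nonce):
    return hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()

def leading_zero_bits(d):
    n = int.from_bytes(d, "big")
    return 256 - n.bit_length()

def mint(challenge, bits=16):
    """The expensive step: brute-force a nonce until the hash has enough
    leading zero bits (~2**bits attempts on average)."""
    nonce = 0
    while leading_zero_bits(digest(challenge, nonce)) < bits:
        nonce += 1
    return nonce

def verify(challenge, nonce, bits=16):
    """The cheap step: checking a stamp is a single hash."""
    return leading_zero_bits(digest(challenge, nonce)) >= bits

stamp = mint("alice@example.com")          # slow: ~2**16 hashes
print(verify("alice@example.com", stamp))  # fast: one hash
```

The firm decision would be to run `mint` for three days at a much higher difficulty and check the output; Alice’s vision hands you the winning nonce, and `verify` confirms it in one hash.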