Sorry, I was sort of asking a general question and putting it in terms of this particular problem at the same time. I should have been clearer.
What I meant was “I like TDT, but I think it’s insufficient: it doesn’t seem to easily deal with the fact that the physical implementation of the abstract computation can potentially end up having other things happen that result in something OTHER than what the ideal Platonic algorithm says should happen.”
I think, though, that my initial suggestion might not have been the right solution. Instead, maybe invert it: say “actual initial state of hardware/software/etc” feeds into a “selector that selects a platonic algorithm”, which then feeds into “output”… then, depending on how you want to look at it, have other external stuff (radiation, damage to hardware occurring mid-computation, and so on) feed causal inputs into those last two nodes. My initial thought would be the second-to-last node.
The idea here being that such errors change which platonic computation actually occurred.
Then you can frame decisions as choosing “what does the abstract computation that I am at this moment output?”, with the caveat of “but I’m not absolutely certain that I am computing the specific algorithm that I think I am”… so that is where one could place the uncertainty that arises from hardware bugaboos and so on. (Also, perhaps, logical uncertainty about whether your code actually implements the algorithms you think it does, if that’s relevant.)
I’m still having trouble seeing what troubles you. Yes, the physical hardware might mess up the attempt to implement the Platonic algorithm. So there’s a probability of Omega guessing wrong, but if Omega picks your most likely action, it will still approximate your action better by just using the Platonic algorithm rather than the Platonic algorithm plus noise.
Also, as Eliezer_Yudkowsky keeps pointing out, you don’t want an agent that computes “what does the abstract computation that I am at this moment output?” because whatever it picks, it’s correct.
AnnaSalamon didn’t mention this, but under Pearl’s model of causal networks, each node is implicitly assumed to have an “external unknown factor” parent (all such factors assumed independent of each other), so this uncertainty is already in the model. So, like any other node, the agent takes this kind of uncertainty into account.
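(To spell that out, here’s a tiny toy sketch of what those implicit error parents look like when written explicitly; the structural equations, the node names, and the flip rate are all made up purely for illustration:)

```python
import random

# Toy illustration: every structural equation gets its own independent
# exogenous "error" parent.  Equations and the flip rate are invented.

def error_fires(prob=1e-6):
    """One node's independent exogenous disturbance."""
    return random.random() < prob

def platonic_output(innards):
    # The idealized computation: what the abstract algorithm says to do.
    return "one-box" if innards == "tdt-agent" else "two-box"

def physical_output(innards):
    # The physical node: its idealized parent's value, possibly corrupted by
    # this node's own exogenous error parent (cosmic rays, hardware faults...).
    intended = platonic_output(innards)
    if error_fires():
        return "two-box" if intended == "one-box" else "one-box"
    return intended

print(physical_output("tdt-agent"))   # almost always "one-box"
```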
What I meant is that for TDT, the agent, for lack of a better word, decides what the outcome for a certain abstract algorithm is. (Specifically, the abstract algorithm that it is using to decide that.)
The agent can reason about other systems computing related algorithms and producing related output, so it knows that what it chooses will be reflected in those other systems.
But I’d want it to be able to take into account the fact that the algorithm it’s actually computing is not necessarily the algorithm it thinks it is computing. That is, due to hardware error or whatever, it may produce an output other than what the abstract calculation it thought it was doing would have produced… thus breaking the correlation it was assuming.
I.e., I just want some way for the agent to be able to take into account the possibility of errors in the hardware and so on, and in raw TDT there didn’t seem to be a convenient way to do that. Adding an extra layer of indirection, setting up the causal net so that “my innards” control a selector which determines which abstract algorithm is actually being computed, would SEEM to fix that in a way that actually fits what’s going on.
If we assume a weaker “Omega” that can’t predict, say, a stray cosmic ray hitting you and causing a 1-bit error or whatever in your decision algorithm, even though it has a copy of your exact algorithm, then that’s where what I’m talking about comes in. In that case, your output would no longer derive from the same abstract computation as Omega’s prediction of your output.
Imagine the set of all possible algorithms feeding into a “my selector node”, and also into omega’s “prediction selector node”… then “my innards” are viewed as selecting which of those determines the output. But a stray cosmic ray comes in and influences the computation… that is, influences which algorithm the “selector” selects.
A stray cosmic ray can’t actually alter an abstract platonic algorithm. Yet it is able to influence the output. So we have to have some way of shoving into TDT the notion of “stuff that actually physically messes with the computation”
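Here’s a rough toy simulation of the picture I have in mind; the two algorithms and the (exaggerated) flip rate are made up purely for illustration:

```python
import random

# Tiny stand-in for the platonic space of algorithms.
ALGORITHM_SPACE = {
    "1A": lambda: "one-box",   # the algorithm my innards are *trying* to run
    "1B": lambda: "two-box",   # a neighbouring algorithm, one bitflip away
}

def selector(innards, cosmic_ray_hit):
    """My innards pick an algorithm; physical interference can shift the pick."""
    intended = innards["intended_algorithm"]
    return "1B" if cosmic_ray_hit and intended == "1A" else intended

def run_trial(flip_prob=1e-3):   # exaggerated flip rate, just so mismatches show up
    innards = {"intended_algorithm": "1A"}
    # "omega" (the weaker kind) models the noiseless selection...
    omegas_prediction = ALGORITHM_SPACE[selector(innards, cosmic_ray_hit=False)]()
    # ...while my actual output comes from whichever algorithm physically got computed.
    my_output = ALGORITHM_SPACE[selector(innards, random.random() < flip_prob)]()
    return omegas_prediction == my_output

trials = 100_000
matches = sum(run_trial() for _ in range(trials))
print(f"prediction matched actual output in {matches}/{trials} trials")
```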
Does that clarify what I’m saying here, or am I describing it poorly, or am I just really wrong about all this?
Okay, I think I see what you’re saying: There is the possibility of something making your action diverge from the Platonic computation you think you’re instantiating, and that would interfere with the relationship between the choice you make and the Platonic algorithm.
On top of that, you say that there should be a “My innards” node between the platonic algorithm node and the action node.
However, you claim Omega can’t detect this kind of interference. Therefore, the interference is independent of the implicit interference at all the other nodes and does not need to be represented. (See my remark about how Pearl networks implicitly have an error-term parent for every node; these only need to be explicitly represented when two or more of them are not independent.)
Also, since there would still be an uninterrupted path from the Platonic algorithm to the choice, the model doesn’t gain anything from these intermediate steps; Pearl nets allow you to collapse them into one edge/node.
And, of course, it doesn’t make much of a difference for Omega’s accuracy anyway...
Yeah, I think you’ve got the point of the problem I’m trying to deal with, though I’m not sure I communicated my current view of the structure of what the solution should be. For one thing, I said that my initial plan, platonic algorithm pointing to innards pointing to output, was wrong.
There may potentially be a platonic-algorithm node pointing to innards, representing the notion of “intent of the original programmer” or whatever, but I figured the more important structure is an inversion of that.
I.e., start with innards… the initial code/state/etc. “selects” a computation from the platonic space of all possible computations. But, say, a stray cosmic ray may interfere with the computation. This would be analogous to an external factor poking the selector, shifting which abstract algorithm is the one being computed. So then “omega” (in quotes because I’m assuming a slightly less omniscient being than usually implied by the name) would be computing the implications of one algorithm, while your output would effectively be the output of a different algorithm. That weakens the correlation that justifies PD cooperation, Newcomb one-boxing, and so on...
I figure the “innards → selector from the space of algorithms” structure would seem to be the right way to represent this possibility. It’s not exactly just logical uncertainty.
So I don’t quite follow how this is collapsible. I.e., it’s not obvious to me that the usual error terms help with this specific issue without the extra node. Unless, maybe, we allow the “output” node to be separate from the “algorithm” node and interpret the extra uncertainty term on the output node as something that (weakly) decouples the output from the abstract algorithm...
Yes, but like I said the first time around, this would be a rare event, rare enough to be discounted if all that Omega cares about is maximizing the chance of guessing correctly. If Omega has some other preferences over the outcomes (a “safe side” it wants to err on), and if the chance is large enough, it may have to change its choice based on this possibility.
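For concreteness, here’s the kind of back-of-the-envelope check I mean; the error rates and the asymmetric penalties are invented numbers, and I’m assuming your platonic algorithm one-boxes while a divergence means you physically two-box instead:

```python
# Invented numbers throughout: a hypothetical Omega with asymmetric losses.

def omegas_best_guess(eps, cost_false_fill=10_000, cost_false_empty=1):
    """Pick the prediction with lower expected loss, given divergence chance eps."""
    loss_predict_onebox = eps * cost_false_fill         # filled box, agent two-boxes
    loss_predict_twobox = (1 - eps) * cost_false_empty  # empty box, agent one-boxes
    return "one-box" if loss_predict_onebox < loss_predict_twobox else "two-box"

for eps in (1e-6, 1e-3):
    print(eps, omegas_best_guess(eps))
# 1e-06 -> one-box; 0.001 -> two-box: only with asymmetric stakes and a big
# enough error chance does the rare-event possibility change Omega's guess.
```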
So, here’s what I take your preferred representation to be:
“Platonic space of algorithms” and “innards” both point to “selector” (the actual space of algorithms influences the selector, I assume); “innards” and “Platonic space” also together point to “Omega’s prediction”, but selector does not, because your omega can’t see the things that can cause it to err. Then, “Omega’s prediction” points to box content and selector points to your choice. Then, of course, box content and your choice point to payout.
Further, you say the choice the agent makes is at the innards node.
Is that about right?
Even if rare, the decision theory used should at least be able to THINK ABOUT THE IDEA of a hardware error or such. Even if it dismisses it as not worth considering, it should at least have some means of describing the situation. ie, I am capable of at least considering the possibility of me having brain damage or whatever. Our decision theory should be capable of no less.
Sorry if I’m unclear here, but my focus isn’t so much on omega as on getting a version of TDT that can at least represent that sort of situation.
You seem to more or less have it right. Except I’d place the choice more at the selector, or at the “node that represents the specific abstract algorithm that actually gets used”.
As per TDT, choose as if you get to decide what the output of the abstract algorithm should be. The catch is that here there’s a bit of uncertainty as to which abstract algorithm is being computed. So if, due to a cosmic ray striking and causing a bitflip at a certain point in the computation, you end up actually computing algorithm 1B while omega models you as algorithm 1A, then that would potentially weaken the dependence. (Again, I’m just using the Newcomb problem as a way of talking about this.)
Okay, so there’d be another node between “algorithm selector” and “your choice of box”; that would still be an uninterrupted path (chain) and so doesn’t affect the result.
The problem, then, is that if you take the agent’s choice as being at “algorithm selector”, or any descendant through “your choice of box”, you’ve d-separated “your choice of box” from “Omega’s prediction”, meaning that Omega’s prediction is conditionally independent of “your choice of box”, given the agent’s choice. (Be careful to distinguish “your choice of box” from where we’re saying the agent is making a choice.)
But then, we know that’s not true, and it would reduce your model to the “innards CSA” that AnnaSalamon gave above. (The parent of “your choice of box” has no parents.)
So I don’t think that’s an accurate representation of the situation, or consistent with TDT. So the agent’s choice must be occurring at the “innards” node in your graph.
(Note: this marks the first time I’ve drawn a causal Bayesian network and used the concept of d-separation to approach a new problem. w00t! And yes, this would be easier if I uploaded pictures as I went.)
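Since I can’t easily upload the picture, here’s the same check redone in code; the graph below is my transcription of the representation described above, and the test is the standard moralized-ancestral-graph d-separation check (node names are mine):

```python
from itertools import combinations

# My transcription of the graph described above, as child -> [parents].
PARENTS = {
    "algorithm_space":    [],
    "innards":            [],
    "algorithm_selector": ["algorithm_space", "innards"],
    "omegas_prediction":  ["algorithm_space", "innards"],
    "box_content":        ["omegas_prediction"],
    "box_choice":         ["algorithm_selector"],
    "payout":             ["box_content", "box_choice"],
}

def ancestors(parents, nodes):
    """The given nodes plus all of their ancestors."""
    seen, stack = set(), list(nodes)
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(parents[n])
    return seen

def d_separated(parents, x, y, given=()):
    """Moralized-ancestral-graph test: is x d-separated from y given `given`?"""
    keep = ancestors(parents, {x, y} | set(given))
    moral = {n: set() for n in keep}
    for child in keep:
        ps = [p for p in parents[child] if p in keep]
        for p in ps:                      # undirected child-parent edges
            moral[child].add(p)
            moral[p].add(child)
        for a, b in combinations(ps, 2):  # "marry" co-parents
            moral[a].add(b)
            moral[b].add(a)
    stack, seen = [x], {x}
    while stack:                          # reachability, not crossing `given`
        n = stack.pop()
        if n == y:
            return False                  # still connected => d-connected
        for m in moral[n] - seen - set(given):
            seen.add(m)
            stack.append(m)
    return True

# Before any surgery: box choice and Omega's prediction are d-connected.
print(d_separated(PARENTS, "box_choice", "omegas_prediction"))    # False

# Calculating the "would" by surgery on "algorithm selector" cuts its parents,
# which d-separates box choice from Omega's prediction (and from box content).
surgered = dict(PARENTS, algorithm_selector=[])
print(d_separated(surgered, "box_choice", "omegas_prediction"))   # True
```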
Not sure where you’re getting that extra node from. The agent’s choice is the output of the abstract algorithm they actually end up computing as a result of all the physical processes that occur.
Abstract algorithm space feeds into both your algorithm selector node and the algorithm selector node in “omega”’s model of you. That’s where the dependence comes from.
So given logical uncertainty about the output of the algorithm, wouldn’t they be d-connected? They’d be d-separated if the choice was already known… but if it was, there’d be nothing left to choose, right? No uncertainties to be dependent on each other in the first place.
Actually, maybe I ought to draw a diagram of what I have in mind and upload it to imgur or whatever.
Alright, after thinking about your points some more, and refining the graph, here’s my best attempt to generate one that includes your concerns: Link.
Per AnnaSalamon’s convention, the agent’s would-node-surgery is in a square box, with the rest elliptical and the payoff octagonal. Some nodes that would normally be left out are included for clarity. Dotted lines indicate edges that are cut for surgery when fixing the “would” node. One link I wasn’t sure about has a “?”, but it’s not that important.
Important points: The cutting of parents for the agent’s decision preserves d-connection between box choice and box content. Omega observes innards and attempted selection of algorithm but retains uncertainty as to how the actual algorithm plays out. Innards contribute to hardware failures to accurately implement algorithm (as do [unshown] exogenous factors).
And I do hope you follow up, given my efforts to help you spell out your point.
Just placing this here now as sort of a promise to follow up. Just that I’m running on insufficient sleep, so can only do “easy stuff” at the moment. :) I certainly plan on following up on our conversation in more detail, once I get a good night’s sleep.
Understood. Looking forward to hearing your thoughts when you’re ready :-)
Having looked at your diagram now, that’s not quite what I have in mind. For instance, “what I attempt to implement” is kinda an “innards” issue rather than deserving a separate box in this context.
Actually, I realized that what I want to do is kind of weird, sort of amounting to doing surgery on a node while being uncertain as to what node you’re doing the surgery on. (Or, alternately, being uncertain about certain details of the causal structure). I’m going to have to come up with some other notation to represent this.
Before we continue… do you have any objection to me making a top-level post for this (drawing out an attempt to diagram what I have in mind and so on)? Frankly, even if my solution is complete nonsense, I really do think this problem needs to be dealt with as a larger issue.
I’ve begun working on the diagram, though I’m still thinking through the exact way to draw it. I’ll probably have to use a crude hack of simply showing lots of surgery points and basically saying “do surgery at each of these one at a time, weighing the outcome by the probability that that’s the one you’re actually effectively operating on”. (This will (hopefully) make more sense in the larger post.)
Grr! That was my first suggestion!
Not that weird, actually. I think you can do that by building a probabilistic twin network. See the good Pearl summary, slide 26. Instead of using it for a counterfactual, surgically set a different node in each subnetwork, and also the probabilities coming from the common parent (U in slide 26) to represent the probability of each subnetwork being the right one. Then use all terminal nodes across both subnetworks as the outcome set for calculating probability.
Though I guess that amounts to what you were planning anyway. Another way might be to use multiple dependent exogenous variables that capture the effect of cutting one edge when you thought you were cutting another.
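As a rough sketch of what that probability-weighted surgery could look like (the payoffs are the usual Newcomb numbers; the epsilon and the 50/50 fallback for Omega’s model in the “wrong node” branch are arbitrary assumptions of mine):

```python
# Minimal sketch of "do surgery at each candidate node, weighted by the
# probability that it's the one you're actually setting."  Payoffs are the
# usual Newcomb numbers; eps and the 50/50 fallback below are assumptions.

PAYOFF = {("one-box", "one-box"): 1_000_000,   # (my choice, Omega's prediction)
          ("one-box", "two-box"): 0,
          ("two-box", "one-box"): 1_001_000,
          ("two-box", "two-box"): 1_000}

def ev(action, eps=1e-6):
    # Subnetwork 1 (prob 1 - eps): I really am the algorithm Omega models, so
    # surgery on that algorithm's output sets both my choice and its prediction.
    ev_right_node = PAYOFF[(action, action)]
    # Subnetwork 2 (prob eps): hardware error -- I'm only setting my physical
    # output, and Omega's prediction is whatever its model outputs regardless.
    # Modeling that as 50/50 is an arbitrary simplification.
    ev_wrong_node = 0.5 * PAYOFF[(action, "one-box")] + 0.5 * PAYOFF[(action, "two-box")]
    return (1 - eps) * ev_right_node + eps * ev_wrong_node

for a in ("one-box", "two-box"):
    print(a, ev(a))
# One-boxing still wins comfortably for small eps.
```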
No problem, just make sure to link this discussion.
*clicks first link*
And I said that was more or less right, didn’t I? ie, “what I attempt to implement” ~= “innards”, which points to “selector”/”output”, which selects what actually gets used.
Looking through the second link (ie, the slides) now
Okay, I think there are some terminological issues to sort out here, resulting from our divergence from AnnaSalamon’s original terminology.
The discussion I thought we were having corresponds to the CSA’s calculation of “woulds”. And when you calculate a would, you surgically set the output of the node, which means cutting the links to its parents.
Is this where we are? Are you saying the “would” should be calculated from surgery on the “algorithm selector” node (which points to “choice of box”)? Because in that case, the links to “algorithm selector” from “algorithm space” and “innards” are cut, which d-separates them. (ETA: to clarify: d-separates “box choice” from Omega and its descendants.)
OTOH, even if you follow my suggestion and do surgery on “innards”, the connection between “box choice” and “omega’s prediction” is only a weak link—algorithm space is huge.
Perhaps you also want an arrow from “algorithm selector” to “omega’s prediction” (you don’t need a separate node for “Omega’s model of your selector” because it chains). Then, the possible difference between the box choice and omega’s prediction emerges from the independent error term pointing to box choice (which accounts for cosmic rays, hardware errors, etc.) There is a separate (implicit) “error parent” for the “Omega’s prediction” node, which accounts for shortcomings of Omega’s model.
This preserves d-connection (between box choice and box content) after a surgery on “algorithm selector”. Is that what you’re aiming for?
(Causal Bayes nets are kinda fun!)
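(And, to sanity-check that structure, a quick toy simulation; the error rate is invented, and I’m ignoring Omega’s own error parent here:)

```python
import random

# Quick simulation of the structure suggested above: surgery on "algorithm
# selector" drives both my box choice and (via Omega's prediction) the box
# content, while an independent error term on the choice occasionally
# decouples the two.  The 1e-3 error rate is invented for illustration.

def simulate(selected_algorithm, error_prob=1e-3):
    omegas_prediction = selected_algorithm            # selector -> prediction
    box_content = 1_000_000 if omegas_prediction == "one-box" else 0
    box_choice = selected_algorithm                   # selector -> choice ...
    if random.random() < error_prob:                  # ... plus its own error parent
        box_choice = "two-box" if box_choice == "one-box" else "one-box"
    payout = box_content + (1_000 if box_choice == "two-box" else 0)
    return box_choice, box_content, payout

for do_selector in ("one-box", "two-box"):            # the surgically set "would"
    results = [simulate(do_selector) for _ in range(10_000)]
    avg = sum(p for _, _, p in results) / len(results)
    print(do_selector, round(avg, 1))
# Box content still tracks the surgically chosen algorithm (d-connection kept),
# and the rare error term is all that separates choice from prediction.
```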