Alright, after thinking about your points some more and refining the graph, here's my best attempt at one that incorporates your concerns: Link.
Per AnnaSalamon's convention, the agent's would-node surgery is in a square box, the rest of the nodes are elliptical, and the payoff is octagonal. Some nodes that would normally be left out are included for clarity. Dotted lines indicate edges that get cut during surgery when fixing the "would" node. One link I wasn't sure about is marked with a "?", but it's not that important.
Important points: the cutting of parents for the agent's decision preserves d-connection between box choice and box content. Omega observes the innards and the attempted selection of an algorithm, but retains uncertainty as to how the actual algorithm plays out. The innards contribute to hardware failures in accurately implementing the algorithm (as do [unshown] exogenous factors).
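For concreteness, here's a minimal sketch of what the parent-cutting amounts to on a toy graph; the node names are my own reconstruction from the description above, not the linked diagram itself. Surgery deletes the edges into the fixed node and leaves everything else alone, so box choice keeps a back-door route to box content through the innards:

```python
# Toy do-surgery on a graph stored as {node: set of parents}. Node names are
# guesses reconstructed from the description above, not the linked diagram.
graph = {
    "innards": set(),
    "omega_prediction": {"innards"},               # Omega observes innards
    "box_content": {"omega_prediction"},
    "would": {"innards"},                          # the agent's would-node
    "hardware_failure": {"innards"},               # innards drive implementation errors
    "box_choice": {"would", "hardware_failure"},
    "payoff": {"box_choice", "box_content"},
}

def do_surgery(graph, node):
    """Cut every edge into `node`, leaving the rest of the graph intact."""
    surgered = {n: set(parents) for n, parents in graph.items()}
    surgered[node] = set()  # the fixed node no longer listens to its parents
    return surgered

after = do_surgery(graph, "would")
# "box_choice" still has "innards" as an ancestor via "hardware_failure", so
# box choice and box content stay d-connected through their common cause.
```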
And I do hope you follow up, given my efforts to help you spell out your point.
Just placing this here now as a sort of promise to follow up. I'm running on insufficient sleep, so I can only do "easy stuff" at the moment. :) I certainly plan to follow up on our conversation in more detail once I get a good night's sleep.
Understood. Looking forward to hearing your thoughts when you’re ready :-)
Having looked at your diagram now, that's not quite what I have in mind. For instance, "what I attempt to implement" is more of an "innards" issue than something deserving a separate box in this context.
Actually, I realized that what I want to do is kind of weird, sort of amounting to doing surgery on a node while being uncertain as to which node you're doing the surgery on. (Or, alternatively, being uncertain about certain details of the causal structure.) I'm going to have to come up with some other notation to represent this.
Before we continue… do you have any objection to me making a top-level post for this (drawing out an attempt to diagram what I have in mind, and so on)? Frankly, even if my solution is complete nonsense, I really do think this problem needs to be dealt with as a larger issue.
I've begun working on the diagram, though I'm still thinking out the exact way to draw it. I'll probably have to use a crude hack of simply showing lots of surgery points and basically saying "do surgery at each of these one at a time, weighting the outcome by the probability that that's the one you're actually effectively operating on." (This will, hopefully, make more sense in the larger post.)
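As a toy illustration of that hack (everything here is a made-up assumption: the candidate nodes, their probabilities, and the Newcomb-style payoffs), the weighting is just a probability-mixed expected value:

```python
# Toy version of the "crude hack": weight each candidate surgery point by the
# probability that it's the one actually being operated on.
P_SURGERY = {"innards": 0.9, "box_choice": 0.1}  # P(this is the real target)

# Expected payoff of each action given where the surgery actually lands,
# precomputed for a toy setup where Omega predicts from the innards.
EU = {
    ("innards", "one-box"): 1_000_000,   # prediction tracks the set value
    ("innards", "two-box"): 1_000,
    ("box_choice", "one-box"): 500_000,  # prediction stuck at a 50/50 prior
    ("box_choice", "two-box"): 501_000,
}

def mixture_value(action):
    return sum(p * EU[node, action] for node, p in P_SURGERY.items())

best = max(("one-box", "two-box"), key=mixture_value)
# one-box: 0.9 * 1,000,000 + 0.1 * 500,000 = 950,000
# two-box: 0.9 * 1,000     + 0.1 * 501,000 =  51,000  -> one-box wins here
```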
Grr! That was my first suggestion!
Not that weird, actually. I think you can do that by building a probabilistic twin network; see the good Pearl summary, slide 26. Instead of using it for a counterfactual, surgically set a different node in each subnetwork, and set the probabilities coming from the common parent (U in slide 26) to represent the probability of each subnetwork being the right one. Then use all terminal nodes across both subnetworks as the outcome set for calculating probabilities.
Though I guess that amounts to what you were planning anyway. Another way might be to use multiple dependent exogenous variables that capture the effect of cutting one edge when you thought you were cutting another.
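Roughly what that construction might look like in code, under the same toy assumptions as the sketch above: both copies of the network share their exogenous draw, each copy has a different node surgically set, and U weights which copy is the right one. Pooling the payoffs reproduces the weighted sum, which is why it amounts to the same plan:

```python
import random

def sample_copy(surgery_node, action, u_innards):
    """One subnetwork of the twin; both copies share the exogenous innards."""
    innards = action if surgery_node == "innards" else u_innards
    omega_prediction = innards  # Omega reads the innards in this toy setup
    box_content = 1_000_000 if omega_prediction == "one-box" else 0
    box_choice = action if surgery_node == "box_choice" else innards
    return box_content + (1_000 if box_choice == "two-box" else 0)

def twin_network_value(action, p_copy_a=0.9, n=100_000):
    """U picks which copy is 'the right one'; P(U = copy A) = p_copy_a."""
    total = 0.0
    for _ in range(n):
        u_innards = random.choice(["one-box", "two-box"])     # shared exogenous draw
        pay_a = sample_copy("innards", action, u_innards)     # copy A: upstream surgery
        pay_b = sample_copy("box_choice", action, u_innards)  # copy B: downstream surgery
        total += p_copy_a * pay_a + (1 - p_copy_a) * pay_b
    return total / n

for action in ("one-box", "two-box"):
    print(action, round(twin_network_value(action)))
# Reproduces the weighted sum from the loop-over-surgery-points version.
```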
Before we continue… do you have any objection to me making a top-level post for this

No problem, just make sure to link this discussion.
*clicks first link*
And I said that was more or less right, didn't I? I.e., "what I attempt to implement" ~= "innards", which points to "selector"/"output", which selects what actually gets used.
Looking through the second link (i.e., the slides) now.