I actually think this is reasonably relevant, and is related to treeification.
I think any combination of {rewriting, using some canonical form} and {treeification, no treeification} is at least possible, and they all seem sort of reasonable. Do you mean the relation is that both rewriting and treeification give you more expressiveness/more precise hypotheses? If so, I agree for treeification, not sure for rewriting. If we allow literally arbitrary extensional rewrites, then that does increase the number of different hypotheses we can make, but these hypotheses can’t be understood as making precise claims about the original computation anymore. I could even see an argument that allowing rewrites in some sense always makes hypotheses less precise, but I feel pretty confused about what rewrites even are given that there might be no canonical topology for the original computation.
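To pin down what I mean by an extensional rewrite, here is a toy sketch (purely illustrative Python, not anyone’s actual circuit-rewriting code): two implementations that agree on every input but expose different intermediate nodes, so a claim about an intermediate node of one has nothing to point at in the other.

```python
def original(x, y, z):
    s = y + z          # intermediate node "s"
    return x * s

def rewritten(x, y, z):
    a = x * y          # intermediate node "a"
    b = x * z          # intermediate node "b"
    return a + b

# Extensionally equal: the two graphs agree on every input we check...
assert all(
    original(x, y, z) == rewritten(x, y, z)
    for x in range(-3, 4)
    for y in range(-3, 4)
    for z in range(-3, 4)
)
# ...but a hypothesis like "node s carries y + z" points at a node that
# does not exist in the rewritten graph, so it is no longer a precise
# claim about the original computation's internals.
```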
Thanks! Mostly agree with your comments. Not sure if I’m fully responding to your q, but...

there might be no canonical topology for the original computation
This sounds right to me, and overall I mostly think of treeification as just a kind of extensional rewrite (plus adding more inputs).
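To make that concrete, here is a minimal sketch (toy Python with a made-up Node class, not any real circuit library): treeification is just a recursive copy that duplicates every shared node, which is where the extra inputs come from.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass(frozen=True)
class Node:
    """A toy computation-graph node (ad hoc for this sketch)."""
    name: str
    fn: Optional[Callable] = None      # None for leaf inputs
    children: Tuple["Node", ...] = ()

def treeify(node: Node, path: str = "out") -> Node:
    """Copy `node` recursively, giving each occurrence a path-dependent name.

    Because we recurse separately down every child edge, a node that was
    shared by two consumers in the DAG becomes two independent copies in
    the result; extensionally the computation is unchanged, but each copy
    can now be intervened on (e.g. scrubbed) separately.
    """
    new_children = tuple(
        treeify(child, f"{path}.{i}") for i, child in enumerate(node.children)
    )
    return Node(name=f"{node.name}@{path}", fn=node.fn, children=new_children)

# DAG where the input x is used twice: out = f(x, g(x)).
x = Node("x")
g_x = Node("g", fn=lambda a: a + 1, children=(x,))
out = Node("f", fn=lambda a, b: a * b, children=(x, g_x))

tree = treeify(out)
# tree now contains two distinct leaves, "x@out.0" and "x@out.1.0":
# the "adding more inputs" part, since the same original input shows up
# once per path to the output.
```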
these hypotheses can’t be understood as making precise claims about the original computation anymore
I think of the underlying graph as providing some combination of 1) causal relationships, and 2) smaller pieces to help with search/reasoning, rather than being an object we inherently care about. (It’s possibly useful to think of hypotheses more as making predictions about the behavior, but idk.)
I do agree that in some applications you might want to restrict which rewrites (including treeification!) are allowed. E.g., in MAD (mechanistic anomaly detection) for ELK, we might want to make use of the fact that there is a single “diamond” (which may be ~distributed, but not ~duplicated) upstream of all the sensors.