Not sure if I’m fully responding to your q but...
> there might be no canonical topology for the original computation
This sounds right to me, and overall I mostly think of treeification as just a kind of extensional rewrite (plus adding more inputs).
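To make that concrete, here is a toy sketch of what treeification does (everything here, including `treeify`, `evaluate`, and the dict-based graph encoding, is made up for illustration and not any particular library's API): every node reached along more than one path to the output gets its own copy, so a shared input becomes several inputs, while the function computed stays extensionally the same.

```python
def treeify(graph, root, counter=None):
    """Recursively copy the DAG rooted at `root` into a tree: a node
    reached along a new path gets a fresh copy, so shared subgraphs
    (and inputs) are duplicated rather than reused.
    `graph` maps name -> (fn, arg_names); leaves have arg_names == ().
    Returns (tree, fresh_name)."""
    if counter is None:
        counter = {}
    counter[root] = counter.get(root, 0) + 1
    fresh = f"{root}_{counter[root]}"   # one copy per path
    fn, args = graph[root]
    tree, new_args = {}, []
    for a in args:
        subtree, new_a = treeify(graph, a, counter)
        tree.update(subtree)
        new_args.append(new_a)
    tree[fresh] = (fn, tuple(new_args))
    return tree, fresh

def evaluate(g, name, inputs):
    """Evaluate node `name`; leaf values come from `inputs`,
    keyed by the original (pre-copy) leaf name."""
    fn, args = g[name]
    if not args:
        return inputs[name.rsplit("_", 1)[0]]
    return fn(*(evaluate(g, a, inputs) for a in args))

# x feeds both h1 and h2, so treeification duplicates it:
dag = {
    "x":  (None, ()),
    "h1": (lambda v: v + 1, ("x",)),
    "h2": (lambda v: v * 2, ("x",)),
    "y":  (lambda a, b: a + b, ("h1", "h2")),
}
tree, root = treeify(dag, "y")
# Extensionally identical output, but "x" now appears twice (x_1, x_2),
# i.e. the rewrite has "added more inputs".
assert evaluate(tree, root, {"x": 3}) == evaluate(dag, "y", {"x": 3}) == 10
```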
> these hypotheses can’t be understood as making precise claims about the original computation anymore
I think of the underlying graph as providing some combination of 1) causal relationships, and 2) smaller pieces to help with search/reasoning, rather than as an object we inherently care about in itself. (It may be more useful to think of hypotheses as making predictions about behavior than as precise claims about internal structure, but I’m not sure.)
I do agree that in some applications you might want to restrict which rewrites (including treeification!) are allowed. E.g., in MAD (mechanistic anomaly detection) for ELK (eliciting latent knowledge), we might want to make use of the fact that there is a single “diamond” (which may be ~distributed, but not ~duplicated) upstream of all the sensors.
I had cached impressions that AI safety people were interested in auditing, ELK, and scalable oversight.
A few AIS people who volunteered to give feedback before the workshop (a sample biased towards people interested in the title) each named a different top choice: scientific understanding (specifically threat models), model editing, and auditing (so 2/3 were unexpected for me).
During the workshop, attendees (again a biased sample, as they self-selected into the session) expressed the most excitement about auditing, unlearning, MAD, ELK, and general scientific understanding. I was surprised by the interest in MAD and ELK; I had expected more skepticism around those, though I can see how they might be aesthetically appealing to the slightly more academic audience.