I’ve considered it, though there is a tradeoff between realism, completeness and complexity.
I could code a whole Thing which aims to capture every LDSL dynamic I know of, though in that case the code would be very long and also contain factors that I need to describe in later posts in the series.
Alternatively I could simplify it, e.g. by just taking the near-final distributions without addressing the more involved question of how you end up with such distributions, though then it may seem a bit arbitrary, since one could change the distributions to get different results.
Or I could let the simulation be incomplete, only showing one facet of LDSL even though other facets would be verifiably wrong in the simulated data, but then at least that facet would be fairly robust to variations in assumptions.
Obviously to some extent I can balance these approaches, but I would be curious which one most aligns with what you want to see.
Edit: as an example of a complexity to consider, there’s the whole “When causation does not imply correlation: robust violations of the Faithfulness axiom” issue I basically haven’t discussed yet, where funky statistical dynamics appear in optimized systems. I could either explicitly simulate this despite not having written about it yet, hard-code distributions that take it into account with no simulation, or just leave this phenomenon out where it’s not absolutely needed for the main point.
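To make the Faithfulness-violation point concrete, here is a minimal sketch (not from the post series; all parameters are my own hypothetical choices) of the standard thermostat-style example: a disturbance genuinely causes the output, but an optimized controller cancels it, so the sample correlation between cause and effect comes out near zero.

```python
import random

random.seed(0)
n = 10_000
disturbances, outputs = [], []
for _ in range(n):
    d = random.gauss(0, 1)        # external disturbance: a genuine cause of y
    control = -d                  # an optimized controller that cancels d exactly
    noise = random.gauss(0, 0.1)  # small residual noise the controller can't see
    y = d + control + noise       # y is causally downstream of d, yet d cancels out
    disturbances.append(d)
    outputs.append(y)

def corr(xs, ys):
    # Pearson correlation, computed by hand to keep the sketch dependency-free.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    sx = (sum((x - mx) ** 2 for x in xs) / len(xs)) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / len(ys)) ** 0.5
    return cov / (sx * sy)

# Despite d being a cause of y, the measured correlation is near zero,
# because the optimization (the controller) screens it off.
print(round(corr(disturbances, outputs), 3))
```

This is the simplest possible version; in a real LDSL simulation the cancellation would presumably emerge from the optimization dynamics rather than being hard-coded as `control = -d`.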