It seems to me that the real issue is rational weighting of reference classes when using multiple models. I want to assign them weights so that they form a good ensemble to build my forecasting distribution from, and these weights should ideally reflect my prior that they are relevant and good, model complexity, and perhaps the degree to which their biases are countered by other reference classes. In the computationally best of all possible worlds I go down the branching rabbit hole and also make probabilistic estimates of the weights. I could also wing it.
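To make the weighting concrete, here is a minimal sketch of pooling reference-class forecasts into one distribution; the classes, parameters, and weights are made-up illustrations of mine, not drawn from any dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reference classes for a one-year return, each summarized
# as a sampling function; the parameters are purely illustrative.
samplers = [
    lambda n: rng.normal(0.10, 0.30, n),   # "large-cap tech"
    lambda n: rng.normal(0.05, 0.60, n),   # "recent high-growth entrants"
    lambda n: rng.normal(0.07, 0.15, n),   # "broad index baseline"
]
weights = [0.5, 0.2, 0.3]   # my prior relevance weights, summing to 1

# Sample the weighted mixture: pick a class by weight, then draw from it.
n = 100_000
picks = rng.choice(len(samplers), size=n, p=weights)
pooled = np.concatenate(
    [samplers[i](int((picks == i).sum())) for i in range(len(samplers))]
)

print(f"pooled mean {pooled.mean():.3f}, 90% interval "
      f"({np.quantile(pooled, 0.05):.2f}, {np.quantile(pooled, 0.95):.2f})")
```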
The problem is that the set of potential reference classes appears to be badly defined. The Tesla case potentially involves all possible subsets of stocks (2^N) over all possible time intervals (2^NT), but as the dictator case shows, there is also a potentially unbounded set of other facts that might be included when selecting reference classes. That means we should be suspicious about having well-formed priors over the set of reference classes.
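A quick count shows just how badly defined this set is; even before adding non-stock facts, the numbers are far beyond anything a well-formed prior could cover (the N and T below are just illustrative):

```python
# Back-of-envelope count of candidate reference classes (illustrative numbers).
N, T = 500, 2500              # e.g. ~500 stocks over ~10 years of trading days
subsets = 2**N                # all subsets of stocks
windows = T * (T + 1) // 2    # even just the contiguous time windows
print(f"2^{N} subsets ≈ 10^{N * 0.30103:.0f}, times {windows:,} windows each")
```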
When some sensible reference classes pop up in my mind and I select from them, I am doing naturalistic decision making, where past experience gates availability. So while I should weight their results together, I should be aware that they are biased in this way, and broaden my model uncertainty for the weighting accordingly. But how much I broaden it depends on how large I allow the considered set of potential reference classes to be, which is a separate meta-prior.
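One hedged way to encode that broadening (the mapping from the assumed size of the unexamined set to a concentration parameter is my own illustrative choice, not a standard rule): treat the weights themselves as Dirichlet-distributed, and lower the concentration as the meta-prior admits more unconsidered classes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Means of three hypothetical reference-class forecasts (made-up numbers).
mu = np.array([0.10, 0.05, 0.07])
base_weights = np.array([0.5, 0.2, 0.3])

def mean_spread(concentration, n_draws=10_000):
    """Spread of the pooled forecast mean when the weights are uncertain.
    Lower concentration = larger assumed universe of unconsidered classes
    = wider uncertainty about which weighting is right."""
    w = rng.dirichlet(concentration * base_weights, size=n_draws)
    return (w @ mu).std()

print(mean_spread(50.0))   # fairly confident in the availability-driven weights
print(mean_spread(2.0))    # humble: much wider uncertainty about the pooled mean
```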
We do not assume mirrors. As you say, there are big limits due to conservation of étendue. We are assuming (if I remember right) photovoltaic conversion into electricity and/or microwave beams received by rectennas. Now, all that conversion back and forth induces losses, but they are not orders of magnitude large.
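As a rough sanity check on "not orders of magnitude" (the efficiencies below are ballpark figures of my own, not numbers from the paper):

```python
# Chained efficiency of photovoltaics -> microwave beam -> rectenna.
# All values are illustrative ballpark figures.
pv           = 0.25   # photovoltaic conversion
dc_to_rf     = 0.80   # electricity to microwaves
beam_capture = 0.90   # fraction of the beam landing on the rectenna
rectenna     = 0.85   # microwaves back to electricity

total = pv * dc_to_rf * beam_capture * rectenna
print(f"end-to-end: {total:.0%}")  # ~15%: lossy, but well under one order of magnitude
```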
In the years since we wrote that paper I have become much more fond of solar thermal conversion (use the whole spectrum rather than just part of it), and of lightweight statite-style foil Dyson swarms rather than heavier collectors. The solar thermal conversion doesn't change things much (but allows for a more clean-cut analysis of entropy and efficiency; see Badescu's work). The statite style, however, reduces the material requirements by many orders of magnitude: Mercury is safe; I only need the biggest asteroids.
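The material savings are easy to check with the standard statite condition, where radiation pressure balances solar gravity at a fixed areal density independent of distance (the full 1 AU swarm below is my own illustrative configuration):

```python
import math

# Statite foil: radiation pressure balances gravity at areal density
# sigma = L / (4*pi*c*G*M) for a perfectly absorbing foil, at any distance.
L, c = 3.85e26, 3.0e8           # solar luminosity (W), speed of light (m/s)
G, M = 6.674e-11, 1.989e30      # gravitational constant, solar mass (kg)
AU   = 1.496e11                 # metres

sigma = L / (4 * math.pi * c * G * M)   # ~0.77 g/m^2
mass  = sigma * 4 * math.pi * AU**2     # a full sphere's worth of foil at 1 AU

print(f"areal density {sigma * 1e3:.2f} g/m^2, swarm mass {mass:.1e} kg")
# ~2e20 kg: less than Ceres (~9.4e20 kg), ~1000x less than Mercury (~3.3e23 kg)
```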
Still, detailed modelling of the actual raw material conversion process would be nice. My main headache is not so much the energy input/waste heat removal (although neither is trivial, and they may slow things down for overly concentrated mining operations; another reason to spread out across many places in the asteroid belt), but how to solve the operations management problem of how many units of machine X to build at time t. Would love to do this in more detail!
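A toy version of that scheduling problem, in my own framing rather than anything worked out: if machines can either replicate or build collectors, the optimum for this linear toy is bang-bang control, replicating first and then switching all capacity to collectors near the horizon:

```python
# Toy fleet-allocation problem: each step, machines either replicate
# (growth rate g) or build collectors; find the best time to switch.
def collectors_built(switch_step, horizon=100, g=0.05, m0=1.0):
    machines, collectors = m0, 0.0
    for t in range(horizon):
        if t < switch_step:
            machines *= 1 + g       # all capacity into replication
        else:
            collectors += machines  # all capacity into collector output
    return collectors

best = max(range(101), key=collectors_built)
print(f"switch at step {best} of 100 -> {collectors_built(best):.0f} collectors")
```

The real problem of course has many machine types, mass and energy constraints, and logistics between sites, but even the toy shows why the build schedule, rather than the physics, is where the headache lives.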