I don’t think that weights are the right answer—not that they aren’t better than nothing, but as the Tesla case shows, the actual answer is having a useful model with which to apply reference classes. For example, once you model stock prices as a random walk, the useful priors are over volatility rather than over the price level; more precisely, over the gap between implied options volatility and post-hoc realized volatility, both for the stock itself and for similar stocks. (And if your model is stochastic volatility with jumps, you want priors over the inputs to that model.) At that point, you can use the reference classes productively, and which one you pick isn’t nearly as critical.
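A minimal sketch of that volatility-gap comparison, on synthetic data (the simulated prices, the quoted implied vol, and the `realized_vol` helper are all hypothetical, assuming daily closes and a 252-day trading year):

```python
import numpy as np

rng = np.random.default_rng(0)

def realized_vol(prices, periods_per_year=252):
    """Annualized realized volatility from a series of prices."""
    log_returns = np.diff(np.log(prices))
    return log_returns.std(ddof=1) * np.sqrt(periods_per_year)

# Simulate a year of daily prices as a geometric random walk with ~40% vol.
true_vol = 0.40
returns = rng.normal(0, true_vol / np.sqrt(252), size=252)
prices = 100 * np.exp(np.cumsum(returns))

implied_vol = 0.55  # hypothetical at-the-money option quote
gap = implied_vol - realized_vol(prices)

# The prior of interest is over this gap (and how it is distributed across
# similar stocks), not over the price level itself.
print(round(gap, 3))
```

The point of the sketch is just the change of variable: the reference class supplies a distribution for `gap`, which is far more stable across stocks than raw prices are.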
In general, I strongly expect that in “difficult” domains, causal understanding combined with an outside view and reference classes will outperform naively using “better” reference classes alone.