That’s an interesting approach, but I’m not really sure it’s what Luke’s after. He seems to be talking about something closer to Knightian uncertainty and out-of-sample error; given a specific model of AI risk over time, I suppose you could figure out how many bits you receive per time period and calculate such a number, but I think Luke is asking a question more like ‘how much reliability do I have that this model is capturing anything meaningful about the real dynamics? Are the results being driven by one particular assumption, or by some small, unreliable set of datapoints? Is this set of predictions just overfitting?’ One of his points:
There are differences in model uncertainty between the three cases. I know what model to use when predicting a coin flip. My method of predicting whether Matt will show up at a party is shakier, but I have some idea of what I’m doing. With the Strong AI case, I don’t really have any good idea of what I’m doing. Presumably model uncertainty is related to estimate stability, because the more model uncertainty we have, the more we can change our estimate by reducing our model uncertainty.
It’s always a concern in Bayesian reasoning whether you’re using a sensible prior. Theoretically you should always start with the Solomonoff prior and update from there, but implementing it in practice is difficult, to say the least.
However, if we wish to stay in the realm of mathematical formalism (I think Knightian uncertainty lies outside of it by definition?), then the parameter I suggested is sensible. In particular, the relationship between model uncertainty and estimate stability is well captured by this parameter.
For example, suppose you have three possible models of strong AI development, M1, M2 and M3, and a meta-model which assigns them probabilities p1, p2 and p3. Then your probability distribution is the convex linear combination of the probability distributions assigned by M1, M2 and M3, with coefficients p1, p2 and p3. Now, if during time period t you expect to learn which of these models is the right one, then my parameter will show the resulting instability.
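To make the three-model example concrete, here is a minimal numerical sketch (my own toy numbers, choice of KL direction, and notation; nothing here comes from the comments themselves), treating each model as a distribution over a binary outcome and computing the expected divergence from the current mixture to whichever model turns out to be correct:

```python
# Toy sketch of the three-model example (hypothetical numbers, my choice of
# KL direction). Each model M_i assigns a distribution over a binary outcome,
# the meta-model assigns weights p_i, and the "instability" parameter is the
# expected KL divergence from the current mixture to the post-learning belief.
import numpy as np

def kl(p, q):
    """D_KL(p || q) for discrete distributions, in nats."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Hypothetical distributions over ("strong AI by date X", "not by date X").
M = np.array([[0.9, 0.1],
              [0.5, 0.5],
              [0.1, 0.9]])
p = np.array([0.3, 0.4, 0.3])   # meta-model probabilities p1, p2, p3

mixture = p @ M                  # current belief: convex combination of the models

# If period t reveals which model is right, we land on M_i with probability p_i,
# so the expected size of the update is sum_i p_i * D_KL(M_i || mixture).
instability = sum(pi * kl(Mi, mixture) for pi, Mi in zip(p, M))

print("current distribution:", mixture)        # [0.5, 0.5] for these numbers
print("expected KL divergence:", instability)
```

With these numbers the current mixture is maximally uncertain, but the expected divergence is substantial because learning which model is true would move the estimate a lot; concentrating the meta-model weights on a single M_i drives the parameter toward zero, which is the stability intuition above.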
Yes, but you can check your models in a variety of ways. You can test your inferred results from your dataset by doing bootstrapping or cross-validation, and see how often your results change (coefficients, estimation accuracy, etc.). To step up a level, you can set parameters in your model to differing values based on hyperparameters, and see how each of the variants of the model performs on the data (and then you can bootstrap/cross-validate each of the possible models as well), and then see how sensitive your results are to specific parameters like, yes, whatever priors you were feeding in. You can have families of models, like pitting logistic regression models against random forests, and see how often they differ as another form of sensitivity analysis (and then you can vary the hyperparameters in each model and bootstrap/cross-validate each possible model). You can have ensembles of models from various families and obviously vary which models are picked and what weights are put on them… and there my knowledge peters out.
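As a rough illustration of the bootstrap and model-family checks just described (a minimal sketch on a synthetic dataset, assuming scikit-learn; the dataset, models, and numbers are my own illustration, not from the discussion):

```python
# Bootstrap-resample the data, refit two model families, and look at how much
# the cross-validated accuracy varies across resamples and between families.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

rng = np.random.default_rng(0)
scores = {name: [] for name in models}
for _ in range(50):                                # bootstrap resamples of the data
    idx = rng.integers(0, len(y), size=len(y))
    Xb, yb = X[idx], y[idx]
    for name, model in models.items():
        # 5-fold cross-validated accuracy on this particular resample
        scores[name].append(cross_val_score(model, Xb, yb, cv=5).mean())

for name, s in scores.items():
    print(f"{name}: mean accuracy {np.mean(s):.3f}, sd across resamples {np.std(s):.3f}")

# A wide spread across resamples, or frequent disagreement between the two model
# families, suggests the conclusions are being driven by a few datapoints or by
# modelling assumptions rather than by the data.
```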
But while you still would not have come close to what a Solomonoff approach might do, you have still learned a great deal about your model’s reliability in a way that I can’t see as having any connection with your time and KL-related approach.
I think there is a connection. Namely, the methods you mentioned are possible mechanisms of a learning process, but the parameter I suggested (the expectation of the KL divergence between the current probability distribution and the future one) is a quantification of the expected impact of this learning process.
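For concreteness, in the three-model example above the parameter can be written out as follows (my notation, reconstructing the formula that did not survive formatting, and with the direction of the divergence chosen by me):

```latex
% Instability over period t: expected divergence between the post-learning
% distribution and the current mixture.
\[
  P \;=\; p_1 M_1 + p_2 M_2 + p_3 M_3,
  \qquad
  \text{instability} \;=\; \sum_{i=1}^{3} p_i \, D_{\mathrm{KL}}\!\left(M_i \,\middle\|\, P\right).
\]
```

Written this way, the quantity is the mutual information between which model is true and the outcome, so it shrinks to zero as the meta-model becomes certain of one M_i and grows when the weight is spread over models that disagree sharply.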
Yes, I see what you mean: the mean/expectation of how large the divergence between our current probability distribution and the future probability distribution will be. But this seems like a post hoc or purely descriptive approach: how do we estimate how much divergence there may be?
Once we have estimates of future divergence, quantifying that divergence may be useful, but it seems like putting the cart before the horse to start with your measure.