Okay, maybe I was somewhat unfair in saying there are no results. Still, I think it’s good to distinguish “internal results” and “external results”. Take the example of complex analysis: we have many beautiful results about complex holomorphic functions, like Cauchy’s integral formula. I call these internal results. But what made complex analysis so widely studied is that it could be used to produce some external results, like calculating the integral under the bell curve or proving the prime number theorem. These are questions that interested people even before holomorphic functions were invented, so proving them gave legitimacy to the new complex analysis toolkit. Obviously, Cauchy’s integral formula and the like are very useful too, as we couldn’t reach the external results without understanding the toolkit itself better through the internal results. But my impression is that John was asking for an explanation of the external results, as they are of more interest in an introductory post.
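(To make the analogy concrete: Cauchy’s integral formula is the paradigmatic internal result, while the Gaussian integral is the kind of external payoff people cared about before the toolkit existed, and which complex-analytic methods can be used to evaluate:

$$f(a) = \frac{1}{2\pi i}\oint_\gamma \frac{f(z)}{z-a}\,dz \qquad\qquad \int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}$$

The first is a statement about holomorphic functions themselves; the second answers a question you could ask without ever having heard of them.)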
I count the work on Newcomb as an external result: “What learning process can lead to successfully learning to one-box in Newcomb’s game?” is a natural question someone might ask without hearing about infra-Bayesianism, and I think IB gives a relatively natural framework for that (although I haven’t looked into this deeply, and I don’t know exactly how natural or hacky it is). On the other hand, from the linked results, I think the 1st, 4th and 5th are definitely internal results, I don’t understand the 3rd so can’t comment on it, and the 2nd is the Newcomb result, which I acknowledge. Similarly, I think IBP itself tries to answer an external question (formalizing naturalized induction), but I’m not convinced it succeeds in that, and I think the theorems are mostly internal results, and not something I would count as external evidence. (I know less about this, so maybe I’m missing something.)
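For readers who haven’t seen the Newcomb question framed as a learning problem, here is a minimal toy sketch (plain Python, not infra-Bayesian, and with the hard part assumed away): a simple averaging learner facing a predictor that foresees this round’s action quickly settles on one-boxing, because the observed payoffs make one-boxing dominate.

```python
import random

# Toy, non-infra-Bayesian sketch of repeated Newcomb's game.
# Box A always holds $1,000; box B holds $1,000,000 iff the predictor
# predicts one-boxing. One-boxing takes only B; two-boxing takes both.

def payoff(action, prediction):
    box_b = 1_000_000 if prediction == "one-box" else 0
    box_a = 1_000
    return box_b if action == "one-box" else box_b + box_a

def run(episodes=10_000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    actions = ["one-box", "two-box"]
    totals = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}
    for _ in range(episodes):
        # epsilon-greedy choice between the two dispositions
        if rng.random() < epsilon or any(counts[a] == 0 for a in actions):
            action = rng.choice(actions)
        else:
            action = max(actions, key=lambda a: totals[a] / counts[a])
        prediction = action  # perfect predictor: foresees the action actually taken
        totals[action] += payoff(action, prediction)
        counts[action] += 1
    return {a: totals[a] / counts[a] for a in actions}

print(run())  # average payoff: one-boxing ~1,000,000 vs two-boxing ~1,000
```

Of course, this toy dodges exactly the part that makes the problem interesting and that the IB treatment is supposed to handle in a principled way: what happens when the predictor responds to the agent’s policy (or source code) rather than to the action it happens to take on a given round.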
In general, I don’t deny that IB has many internal results, which I acknowledge to be a necessary first step. But I think that John was looking for external results here, and in general my impression is that people seem to believe there are more external results than there really are (did I mention the time I got a message from a group of young researchers asking whether I thought “it is currently feasible integrating multiple competing scientific theories into a single infra-Bayesian model”?). So I think it’s useful to be clearer about the fact that we don’t have that many external results.
I partially agree, but the distinction between “internal” and “external” results is fuzzier and more complicated than you imply. Ultimately, it depends on the original problem you started with. For example, if you only care about prime numbers, then most results of complex analysis are “internal”, with the exception of results that imply something about the distribution of prime numbers. However, if complex functions are a natural way to formalize the original problem, then the same results become “external”.
In our case, the original problem is “creating a mathematical theory of intelligent agents”. (Or rather, the problem is “solving AI alignment”, or “preventing existential risk from AI”, or “creating a flourishing future for human civilization”, but let’s suppose that the path from there to “creating a mathematical theory of intelligent agents” is already clear; in any case that’s not related specifically to IB.) Infra-Bayesianism is supposed to be an actual ingredient in this theory of agents, not just some tool brought from the outside. In this sense, it already starts out as somewhat “external”.
To give a concrete example, you said that results about IB multi-armed bandits are “internal”. While I agree that these results are only useful as very simplistic toy models, they are potentially necessary steps towards stronger regret bounds in the future. At what point does it become “external”? Taking it to the extreme, I can imagine regret bounds so powerful that they would serve as substantial evidence that an algorithm satisfying them is AGI or close to AGI. Would such a result still be “internal”?! Arguably not, because AGI algorithms are very pertinent to what we’re interested in!
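For readers unfamiliar with regret bounds, here is a minimal classical (non-infra-Bayesian) sketch of the kind of statement involved: UCB1 on a stochastic multi-armed bandit incurs cumulative regret growing only logarithmically in the horizon, so its average per-round performance approaches that of the best arm. The IB bandit results prove guarantees of roughly this shape when the environment is only pinned down to a set of possibilities rather than a single prior, and the hope is that the same pattern scales to much richer settings.

```python
import math
import random

# Minimal classical multi-armed bandit sketch (UCB1), just to illustrate
# what a regret statement is about; this is NOT the infra-Bayesian algorithm.

def ucb1(arm_means, horizon=20_000, seed=0):
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k
    sums = [0.0] * k
    best = max(arm_means)
    regret = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1  # pull each arm once first
        else:
            # pick the arm with the highest optimistic (upper-confidence) estimate
            arm = max(
                range(k),
                key=lambda i: sums[i] / counts[i]
                + math.sqrt(2 * math.log(t) / counts[i]),
            )
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0  # Bernoulli arm
        counts[arm] += 1
        sums[arm] += reward
        regret += best - arm_means[arm]  # per-round gap to the best arm
    return regret

print(ucb1([0.3, 0.5, 0.7]))  # cumulative regret grows only logarithmically in the horizon
```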
You can also take the position that any result without direct applications to existing, practical, economically competitive AI systems is “internal”. In that case, I am comfortable with a research programme that only has “internal” results for a long time (although not everyone would agree). But this also doesn’t seem to be your position, since you view results about Newcombian problems as “external”.