Ok, if what you’re saying is not “SI concludes this” but just that we don’t really know what even the theoretical SI concludes, then I don’t disagree with that, and in fact I have made similar points before. (See here and here.) I guess I give Eliezer and Luke more of a pass (i.e. don’t criticize them heavily based on this) because it doesn’t seem like any other proponent of algorithmic information theory (for example Schmidhuber or Hutter) realizes that Solomonoff Induction may not assign most of its posterior probability mass to “physics sim + location” type programs, or, if they do realize it, they choose not to point it out. That presentation you linked to earlier is a good example of this.
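To pin down what is being claimed, here is the standard Solomonoff bookkeeping (nothing beyond the usual definitions is assumed): the posterior mass a class of programs $\mathcal{C}$ receives is just the total prior weight of its members that reproduce the observed data, so the open question is whether the shortest data-reproducing programs really are of the “physics sim + location” form.

\[
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)},
\qquad
\Pr(\mathcal{C} \mid x) \;=\; \frac{\sum_{p \in \mathcal{C} \,:\, U(p) = x*} 2^{-\ell(p)}}{M(x)},
\]
where $U$ is a universal prefix machine, $\ell(p)$ is the length of program $p$, and $U(p) = x*$ means that $p$’s output begins with the observed data $x$.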
I believe that Hutter et al. were rightfully careful not to expect anything specific, i.e. not to expect it not to kill him, not to expect it to kill him, etc.
You would think that if Hutter thought there was a significant chance that AIXI would kill him, he would point that out prominently, so that people would prioritize working on this problem or at least keep it in mind as they try to build AIXI approximations. But instead he immediately encourages people to use AIXI as a model for building AIs (in A Monte Carlo AIXI Approximation, for example) without mentioning any potential dangers.
Those are questions that should, at last, be formally approached.
Before you formally approach a problem (by which I assume you mean trying to formally prove something one way or another), you have to think that the problem is important enough. How can we decide that, except by using intuition and heuristic/informal arguments? And in this case it seems likely that a proof would be too hard to do (AIXI is uncomputable, after all), so intuition and heuristic/informal arguments may be the only things we’re left with.
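For reference, this is the expectimax expression that defines AIXI, roughly as given in Hutter’s work and in the MC-AIXI paper mentioned above; the inner sum ranges over all programs for a universal machine, which is where the uncomputability comes from.

\[
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
\bigl[ r_k + \cdots + r_m \bigr]
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)},
\]
where the $a_i$, $o_i$, $r_i$ are actions, observations, and rewards, $m$ is the horizon, $U$ is a universal (chronological) Turing machine, and $\ell(q)$ is the length of program $q$.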