However, even if you did know the source code, you might still be ignorant about what it would do.
The Halting Problem.
As a simple example, suppose I violate the axiom that P(Heads) + P(Not Heads) = 1 by having P(Not Heads) = P(Heads) = 1/3. Given my stated probabilities, I think a 2:1 bet that the coin is Heads is fair and a 2:1 bet that the coin is Not Heads is fair, so I am willing to take either side of each. A bookie who stakes $1 on Heads and $1 on Not Heads against me at those odds wins exactly one of the two bets however the coin lands, collecting $2 from me while paying out only $1; this combination of bets is guaranteed to lose me $1, making me Dutch-bookable.
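For concreteness, here is a small sketch (my own illustration, not part of the original argument) that checks the arithmetic: by my stated probabilities each bet has zero expected value, yet backing both bets against me nets the bookie $1 whichever way the coin lands.

```python
from fractions import Fraction

# My (incoherent) probabilities: P(Heads) = P(Not Heads) = 1/3.
p = Fraction(1, 3)

# A 2:1 bet on Heads: the backer stakes $1 to win my $2 if Heads comes up.
# By my own probabilities, taking the laying side has zero expected value,
# which is why I call the bet fair (and likewise for Not Heads).
ev_lay_heads = p * (-2) + (1 - p) * 1
print(ev_lay_heads)  # 0

# A bookie backs both bets against me at those odds.
for heads in (True, False):
    my_profit = (-2 if heads else 1) + (1 if heads else -2)
    print("Heads" if heads else "Not Heads", my_profit)  # -1 either way
```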
It’s not clear why you would think that bet is fair.
Solomonoff induction is an example of an ideal empirical induction [process].
4. The astute reader may notice that Brouwer's fixed point theorem is non-constructive. We find the fixed point by brute-force search over all rational numbers. See [5.1.2] for details.
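To illustrate the kind of search the footnote describes, here is a toy one-dimensional sketch (my own illustration; the actual construction works over rational prices, per [5.1.2]): enumerate rationals by increasing denominator until one is an approximate fixed point, which continuity plus Brouwer's theorem guarantees will eventually happen.

```python
from fractions import Fraction

def approximate_fixed_point(f, eps=Fraction(1, 1000)):
    """Brute-force search for a rational x in [0, 1] with |f(x) - x| < eps.

    For a continuous f mapping [0, 1] into itself, Brouwer's theorem
    guarantees an exact fixed point, so by continuity some rational near it
    passes the test and the search terminates.
    """
    denom = 1
    while True:
        for num in range(denom + 1):
            x = Fraction(num, denom)
            if abs(f(x) - x) < eps:
                return x
        denom += 1

# Example: f(x) = 1 - x/2 has its fixed point at x = 2/3.
print(approximate_fixed_point(lambda x: 1 - x / 2))  # 2/3
```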
Interesting.
Solomonoff induction uses all computable worlds as its experts; however, the underlying logic (Bayesian updating) is more general than that. Instead of using all computable worlds, we could use all polynomials, decision trees, or functions representable by a one billion parameter neural network. Of course, these reduced forms of Solomonoff induction would not work as well, but the method of induction would remain unchanged.
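As a minimal sketch of that point (illustrative code of my own, not part of either construction): the Bayesian-updating machinery below takes the class of experts as a plug-in argument, so swapping all computable worlds for, say, a handful of toy predictors changes nothing about the update rule itself.

```python
def bayes_mixture(experts, observations):
    """Bayesian updating over an arbitrary finite class of experts.

    Each expert maps the history (a tuple of past bits) to its probability
    that the next bit is 1.  The update rule never looks inside the experts,
    so the same code works for whatever class we plug in.
    """
    weights = [1.0 / len(experts)] * len(experts)  # uniform prior
    history = []
    for bit in observations:
        # Likelihood each expert assigned to the bit we actually saw.
        likelihoods = [e(tuple(history)) if bit == 1 else 1 - e(tuple(history))
                       for e in experts]
        # Posterior weight is proportional to prior weight times likelihood.
        weights = [w * l for w, l in zip(weights, likelihoods)]
        total = sum(weights)
        weights = [w / total for w in weights]
        history.append(bit)
    return weights

# Two toy experts: "the coin is biased toward 1" and "the coin is fair".
experts = [lambda history: 0.9, lambda history: 0.5]
print(bayes_mixture(experts, [1, 1, 1, 0, 1]))
```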
Similarly, Garrabrant induction employs polynomial-time traders as its experts; however, the underlying logic of trading and markets is more general than that. Instead of using polynomial-time traders, we could use linear-time traders, constant-time traders, or traders representable by a one billion parameter neural network. Of course, these reduced forms of Garrabrant induction would not work as well, but the method of induction would remain unchanged.
Why would Garrabrant induction be better than Garrabrant induction with neural networks?
Re neural networks: All one billion parameter networks should be computable in polynomial time, but there exist functions that are not expressible by a one billion parameter network (perhaps unless you allow for an arbitrary choice of nonlinearity).