A Solomonoff inductor has no concept of betting and certainly can’t decide whether to take a bet.
Assume that we do this the obvious way.
If you like, we can also talk about the number of times that the probability estimate differs from 0.5 by some epsilon.
(The obvious way is “if EV(bet)>0, consider the bet taken.”)
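For concreteness, here is a minimal sketch of that rule, plus the epsilon-deviation count mentioned above. The function names, the even-odds example, and the epsilon default are mine, purely illustrative:

```python
def take_bet(p_one: float, payoff_if_one: float, payoff_if_zero: float) -> bool:
    """Decide a bet the 'obvious way': take it iff its expected value is positive.

    p_one: the inductor's probability that the next bit is 1.
    payoff_if_one / payoff_if_zero: net payoff in each outcome
    (negative values are losses).
    """
    expected_value = p_one * payoff_if_one + (1.0 - p_one) * payoff_if_zero
    return expected_value > 0.0


def count_epsilon_deviations(estimates, epsilon: float = 0.05) -> int:
    """Count how many estimates differ from 0.5 by more than epsilon,
    the alternative bookkeeping suggested above."""
    return sum(1 for p in estimates if abs(p - 0.5) > epsilon)


# Example: an even-odds bet on '1' (win 1 unit, lose 1 unit) gets taken
# exactly when the estimate exceeds 0.5. A zero-EV bet is (arbitrarily) not taken.
assert take_bet(0.6, 1.0, -1.0)      # EV = 0.2 > 0
assert not take_bet(0.5, 1.0, -1.0)  # EV = 0
```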
See the book An Introduction to Kolmogorov Complexity and Its Applications by Ming Li and Paul Vitányi.
Thanks for the recommendation!
If the data string is generated by a computable probability distribution (e.g. a fair coin), then the Solomonoff inductor's payoff rate will asymptotically (and quickly) converge to, or exceed, the expected payoff rate of the best computable stochastic agent.
What shows up in the most obvious place—pg 328 of the book—is that the mean squared error decreases faster than 1/n. And in fact the proof method isn’t strong enough to show that the absolute error decreases faster than 1/n (because the derivative of the absolute error jumps discontinuously).
There’s still the possibility that we can “cheat” by allowing our agent to take bets with positive expected value: whether a bet looks worth taking depends only on the absolute error of the probability estimate, not the squared error.
But still, maybe there are stronger bounds out there (maybe even later in the same book) that would apply to the absolute error. And I’m definitely not disputing that I’m ignorant :)
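For reference, the precise content behind the “faster than 1/n” claim above, going by the standard statement of Solomonoff's convergence theorem (I haven't checked p. 328 of the edition cited, and the exact constant may differ), is a summability bound:

```latex
% Solomonoff's convergence theorem (standard form): for data drawn from a
% computable measure \mu, the universal predictor M satisfies
\sum_{n=1}^{\infty} \; \sum_{x \in \{0,1\}^{n-1}} \mu(x)
  \bigl( M(0 \mid x) - \mu(0 \mid x) \bigr)^2
  \;\le\; \frac{\ln 2}{2}\, K(\mu) \;<\; \infty.
% Summability is the precise sense of "faster than 1/n": terms of size c/n
% would make the series diverge like the harmonic series.
```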
(The obvious way is “if EV(bet)>0, consider the bet taken.”)
I don’t think it’s really obvious. The (retroactively) obvious way to extend SI into a decision agent is AIXI, which has its own optimality theorems.
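For what it's worth, AIXI's action rule is a straight expectimax under a universal mixture; schematically, following Hutter's formulation (m is the horizon, q ranges over programs for the universal machine U, and ℓ(q) is program length):

```latex
% AIXI (Hutter): choose the action maximizing expected total reward to
% horizon m, with environments q weighted by 2^{-length}:
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
  \bigl( r_k + \cdots + r_m \bigr)
  \sum_{q \,:\, U(q,\, a_{1:m}) \,=\, o_{1:m} r_{1:m}} 2^{-\ell(q)}
```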
What shows up in the most obvious place—pg 328 of the book—is that the mean squared error decreases faster than 1/n. And in fact the proof method isn’t strong enough to show that the absolute error decreases faster than 1/n (because the derivative of the absolute error jumps discontinuously).
The absolute error is just the square root of the squared error. If the squared error decreases faster than 1/n, then the absolute error decreases faster than 1/sqrt(n).
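Spelling that step out in expectation (the bound controls expected squared error, so the conversion goes through Jensen's inequality rather than pointwise):

```latex
% Writing e_n for the prediction error at step n, Jensen's inequality gives
\mathbb{E}\,\lvert e_n \rvert \;\le\; \sqrt{\mathbb{E}\, e_n^2},
% hence, if the expected squared error is o(1/n),
\mathbb{E}\,\lvert e_n \rvert \;=\; o\!\bigl(1/\sqrt{n}\bigr).
```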
The point being that if your expected absolute error decreases like 1/n or slower, the expected number of wrong bets you make is infinite.
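The arithmetic behind that, under the assumption (this argument's, not a theorem) that the chance of a wrong bet at step n is comparable to the expected absolute error at step n:

```latex
% Expected total number of wrong bets is \sum_n p_n. Both candidate decay
% rates give a divergent series:
\sum_{n=1}^{\infty} \frac{1}{n} \;=\; \infty,
\qquad
\sum_{n=1}^{\infty} \frac{1}{\sqrt{n}} \;=\; \infty,
% whereas any summable rate, e.g. 1/n^{1+\delta} with \delta > 0, would give
% only finitely many wrong bets in expectation.
```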
You keep framing this in terms of bets. I don’t think there is any point in continuing this discussion.