Solomonoff induction is extraordinarily unhelpful, I think… that it is uncomputable is only one reason.
I think there’s a fairly simple and straightforward strategy to address the black box problem, which has not been mentioned so far...
Because its output is not human-readable being the other?
I mean, even if I’ve got a TARDIS to use as a halting oracle, an Inductive Turing Machine isn’t going to output something I can actually use to make predictions about specific events such as “The black box gives you money under X, Y, and Z circumstances.”
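(For reference — this is the standard definition, not something spelled out in the thread: the Solomonoff prior weights every program that reproduces your observations, so merely evaluating it requires deciding which programs halt, hence the need for the TARDIS.)

$$M(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-\ell(p)}$$

Here U is a universal prefix machine, ℓ(p) is the length of program p, and U(p) = x∗ means p’s output begins with x. And even granting the sum, what you get back is a number — a probability for each continuation of x — not a human-readable mechanism like “the box pays under X, Y, and Z circumstances.”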
Well, the problem I was thinking of is “the universe is not a bit string.” And any unbiased representation we can make of the universe as a bit string is going to be extremely large—much too large to do even sane sorts of computation with, never mind Solomonoff.
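(For scale: with the commonly quoted ~10^80 atoms in the observable universe, even a crude snapshot at some hundred bits per atom runs to roughly 10^82 bits — far too long a string to enumerate hypotheses over, never mind to feed into a sum over all programs.)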
Maybe that’s saying the same thing you did? I’m not sure...
Can you please give us a top level post at some point, be it in Discussion or Main, arguing that “the universe is not a bit string”? I find that very interesting, relevant, and plausible.
Thanks for the encouragement! I have way too many half-completed writing projects, but this does seem an important point.
Going back to the basic question about the black box:
What is the probability of its giving you $2?
Too small to be worth considering. I might as well ask, what’s the probability that I’ll find $2 hidden halfway up the nearest tree? Nothing has been claimed about the black box to specifically draw “it will pay you $2 for $1” out of hypothesis space.
Hmm… given that the previous several boxes have either paid $2 or done nothing, it seems like that primes the hypothesis that the next in the series also pays $2 or does nothing. (I’m not actually disagreeing, but doesn’t that argument seem reasonable?)
“it seems like that primes the hypothesis that the next in the series also pays $2 or does nothing”
Priming a hypothesis merely draws it to attention; it does not make it more likely. Every piece of spam, every con game, “primes the hypothesis” that it is genuine. It also “primes the hypothesis” that it is not. “Priming the hypothesis” is no more evidence than a purple giraffe is evidence of the blackness of crows.
Explicitly avoiding saying that it does pay $2, and saying instead that it is “interesting”, well, that pretty much stomps the “priming” into a stain on the sidewalk.
… purple giraffes are evidence of the blackness of crows, though. Just really, really terrible evidence.
Well, yes. And the mere presence of the idea of $2 for $1 is equally terrible evidence that the black box will do any such thing.
Eliezer speaks in the Twelve Virtues of letting oneself be as light as a leaf, blown unresistingly by the wind of evidence, but evidence of this sort is on the level of the individual molecules and Brownian motion of that leaf.
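A minimal sketch of the exchange above, with made-up numbers (the one-in-a-million prior and the likelihoods are hypothetical, chosen only to show the shape of the argument): under Bayes’ rule, an observation shifts a hypothesis only insofar as it is likelier under that hypothesis than under its negation. “Priming” is equally likely either way — a likelihood ratio of exactly 1 — so the posterior never moves, while weak genuine evidence moves it by a Brownian-motion-sized amount.

```python
# Hypothetical numbers only: how much "priming" vs. weak evidence
# actually moves a hypothesis under Bayes' rule.

def posterior(prior, p_obs_given_h, p_obs_given_not_h):
    """Posterior P(H | observation) for a binary hypothesis H."""
    joint_h = p_obs_given_h * prior
    joint_not_h = p_obs_given_not_h * (1 - prior)
    return joint_h / (joint_h + joint_not_h)

prior = 1e-6  # a tiny prior that the box pays $2 for every $1

# "Priming": the idea being mentioned is equally likely whether or not
# the box pays, so the likelihood ratio is 1 and nothing changes.
print(posterior(prior, 0.5, 0.5))    # 1e-06 -- identical to the prior

# Weak genuine evidence: an observation 100x likelier if the box pays
# moves the posterior from one-in-a-million to about one-in-10,000.
print(posterior(prior, 0.5, 0.005))  # ~1e-04 -- still negligible
```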
Yes, I’m not at all committed to the metaprobability approach. In fact, I concocted the black box example specifically to show its limitations!
It depends on your priors.
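One standard way to cash out “it depends on your priors” (a textbook model, not anything proposed in the thread): treat the boxes as independent draws with an unknown payout propensity θ and put a Beta(α, β) prior on θ. After seeing s paying boxes out of n, the probability that the next one pays is

$$P(\text{next pays} \mid s \text{ of } n) = \frac{s + \alpha}{n + \alpha + \beta}$$

With the uniform prior α = β = 1 this is Laplace’s rule of succession, (s+1)/(n+2); a prior sharply concentrated near θ = 0 barely moves no matter how many boxes pay. Same data, different priors, different answers.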