Suppose this program, along with the inputs that cause it to output the description of a given human brain, is what makes the largest contribution to the probability mass of the bitstring representing that brain in the Solomonoff Prior.
More specifically, to replace my previous summary comment: the above statement sounds kinda redeemable, but it’s so vague and common-sensually absurd that I think it makes a negative contribution. Things like this need to be said clearly, or not at all. It invites all sorts of kookery, not just in the format of presentation, but in one’s own mind as well.
Huh, that’s a surprising response. I thought that at least the intended meaning would be obvious to someone familiar with the Solomonoff Prior. I guess I can address “vague” by making my claim mathematically precise (a rough sketch below), but why “common-sensually absurd”?
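To sketch the precise version (roughly; the notation is ad hoc and I’m glossing over the usual prefix-machine details): write U for the reference universal machine and x for the bitstring describing the brain. The Solomonoff prior assigns

$$M(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-|p|},$$

and the claim is that the single term contributed by the program above, together with the inputs that make it output x, i.e. 2^{-(|q|+|i|)} with q that program and i those inputs, is the largest term in this sum.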
Re “absurd”: it’s not clear why you would say something like the quoted statement.
I was hoping that it would trigger an insight in someone who might solve this mystery for me. As I said, I’m not sure how to develop it into a full answer myself (but it might be related to this other vague/possibly-absurd idea).
Perhaps I’m abusing this community by presenting ideas that are half-formed and “epistemically unhygienic”, but I expect that’s not a serious danger. It seems like a promising direction to explore, one that I don’t see anyone else exploring (kind of like UDT until recently). I have too many questions I’d like to see answered, and not enough time and ability to answer them all myself.