Each element of the set is characterized by a bunch of probabilities; for example there is p_01101, which is the probability that elements x_{i+1} through x_{i+5} are 01101, for any i. I was thinking of using the topology induced by these maps (i.e. generated by preimages of open sets under them).
How is putting a noninformative prior on the reals hard? With the usual required invariance, the uniform (improper) prior does the job. I don’t mind having the prior be improper here either, and as I said I don’t know what invariance I should want; I can’t think of many interesting group actions that apply. Of course 0 and 1 should be treated symmetrically, but that’s trivial to arrange.
I guess you’re right that regularities can be described more generally with computational models; but I expect them to be harder to deal with than this (relatively) simple, noncomputational (though stochastic) model. I’m not looking for regularities among the models, so I’m not sure how a computational model would help me.
Something about this discussion reminds me of a hilarious text:
Now having no reason to otherwise, I decided to assign each of the 64 sequences a prior probability of 1⁄64 of occurring. Now, of course, You may think otherwise but that is Your business and not My concern. (I, as a Bayesian, have a tendency to capitalise pronouns but I don’t care what You think. Strictly speaking, as a new convert to subjectivist philosophy, I don’t even care whether you are a Bayesian. In fact it is a bit of mystery as to why we Bayesians want to convert anybody. But then “We” is in any case a meaningless concept. There is only I and I don’t care whether this digression has confused You.) I then set about acquiring some experience with the coin. Now as De Finetti (vol 1 p141) points out, “experience, since experience is nothing more than the acquisition of further information—acts always and only in the way we have just described: suppressing the alternatives that turn out to be no longer possible...” (His italics)
Now of the 64 sequences, 32 end in a head. Therefore, before tossing the coin my prevision of the 6th toss was 32⁄64. I tossed the coin once and it came up heads. I thus immediately suppressed 32 alternative sequences beginning with a tail (which clearly hadn’t occurred) leaving 32 beginning with a head of which 16 ended with a head. Thus my prevision for the 6th toss was now 16⁄32. (Of course, for a single toss the number of heads can only be 0 or 1 but THINK: prevision is not prediction any more than perversion is predilection.) I then tossed the coin and it came up heads. This immediately eliminated 16 sequences, leaving 16 beginning with 2 heads, 8 of which ended in a head. My prevision of the 6th toss was thus 8⁄16. I carried on like this, obtaining a head on each of the next three goes and amending my prevision to 4⁄8, 2⁄4 and 1⁄2, which is where I then was after the 5th toss, having obtained 5 heads in a row.
The moral of this story seems to be: assume priors over generators, not over sequences. A noninformative prior over the reals will never learn that the digit after 0100 is more likely to be 1, no matter how much data you feed it.
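That contrast can be made concrete with a minimal sketch (the function names and the Beta(1,1) generator prior are my illustrative choices, not anything from the quoted text): condition a uniform prior over all 2^6 coin sequences on five observed heads, versus a uniform prior over the coin’s bias.

```python
from fractions import Fraction

def seq_prior_prediction(observed, n=6):
    """Prevision of a final head under a uniform prior over all 2**n
    length-n sequences.  Conditioning only suppresses sequences that
    contradict the data, and exactly half of the survivors end in a
    head, so the answer never moves off 1/2."""
    survivors = [s for s in range(2 ** n)
                 if all((s >> (n - 1 - i)) & 1 == b
                        for i, b in enumerate(observed))]
    heads = sum(1 for s in survivors if s & 1)
    return Fraction(heads, len(survivors))

def gen_prior_prediction(observed):
    """Prevision under a uniform Beta(1, 1) prior over the coin's
    bias: Laplace's rule of succession, (k + 1) / (n + 2)."""
    n, k = len(observed), sum(observed)
    return Fraction(k + 1, n + 2)

five_heads = [1, 1, 1, 1, 1]
print(seq_prior_prediction(five_heads))  # 1/2 -- learns nothing
print(gen_prior_prediction(five_heads))  # 6/7 -- expects more heads
```

The generator prior learns because the data shift mass toward high-bias coins; the sequence prior has nothing to shift mass between, since every surviving sequence remains equally likely.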
Right, that is a good piece. But I’m afraid I was unclear. (Sorry if I was.) I’m looking for a prior over stationary sequences of digits, not just sequences. I guess the adjective “stationary” can be interpreted in two compatible ways. Either I’m talking about sequences such that, for every possible string w, the proportion of substrings of length |w| that are equal to w tends to a limit as you consider more and more substrings (extending either forward or backward in the sequence); this would not quite be a prior over generators, and isn’t what I meant.
The cleaner thing I could have meant (and did) is the collection of stationary sequence-valued random variables, each of which (up to isomorphism) is completely described by the probabilities p_w of a string of length |w| coming up as w. These, then, are generators.
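As a sketch of what such a generator looks like in practice (the two-state chain and all names here are my own illustrative choices): a stationary Markov chain determines every p_w, and empirical window frequencies recover them from a single long realization.

```python
import random

def markov_sequence(n, p_stay=0.9, seed=0):
    """Sample n symbols from a two-state Markov chain that repeats its
    current symbol with probability p_stay.  Its stationary distribution
    is uniform over {0, 1}, so a long sample stands in for one
    realization of a stationary generator."""
    rng = random.Random(seed)
    x = [rng.randrange(2)]
    for _ in range(n - 1):
        x.append(x[-1] if rng.random() < p_stay else 1 - x[-1])
    return x

def p_hat(x, w):
    """Empirical estimate of p_w: the frequency with which a sliding
    length-|w| window of the sequence equals w."""
    w = [int(c) for c in w]
    k = len(w)
    windows = len(x) - k + 1
    hits = sum(x[i:i + k] == w for i in range(windows))
    return hits / windows

x = markov_sequence(100_000)
# For this chain, p_00 = p_11 = 0.5 * 0.9 = 0.45 and p_01 = p_10 = 0.05
# exactly; the window frequencies should come out close to that:
print(p_hat(x, "00"), p_hat(x, "01"))
```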
Janos, I spent some days parsing your request and it’s quite complex. Cosma Shalizi’s thesis and algorithm seem to address your problem in a frequentist manner, but I can’t yet work out any good Bayesian solution.
One issue with, say, taking a normal distribution and letting the variance go to infinity (which is the improper prior I normally use) is that the posterior distribution is going to have a finite mean, which may not be a desired property of the resulting distribution.
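A minimal sketch of the conjugate arithmetic behind that limit (assuming a normal likelihood with known variance; the function name and numbers are illustrative):

```python
def posterior_mean(xbar, n, sigma2=1.0, tau2=1e12):
    """Posterior mean of mu for a normal likelihood with known variance
    sigma2 and a N(0, tau2) prior on mu.  By conjugacy,
        E[mu | data] = (n * xbar / sigma2) / (n / sigma2 + 1 / tau2),
    which stays finite and tends to the sample mean xbar as the prior
    variance tau2 grows -- the flat-prior limit."""
    return (n * xbar / sigma2) / (n / sigma2 + 1 / tau2)

# Widening the prior drags the posterior mean toward the sample mean,
# but the mean is finite at every stage and in the limit:
for tau2 in (1.0, 100.0, 1e6):
    print(tau2, posterior_mean(xbar=2.5, n=10, tau2=tau2))
```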
You’re right that there’s no essential reason to relate things back to the reals; I was just using that to illustrate the difficulty.
I was thinking about this a little over the last few days, and it occurred to me that one model for what you are discussing might actually be an infinite graphical model. The entries of the infinite bidirectional sequence here are the values of Bernoulli-distributed random variables. Probably the most interesting case for you would be a Markov random field, as the stochastic ‘patterns’ you were discussing may be described in terms of dependencies between random variables.
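As a toy version of that picture (the Ising-style coupling and all names are my own assumptions, not from the thread): Gibbs sampling a finite ring of Bernoulli variables whose conditionals depend only on their neighbours produces exactly this kind of locally generated stochastic pattern.

```python
import math
import random

def gibbs_ising_ring(n=200, coupling=1.0, sweeps=500, seed=0):
    """Gibbs sampling on a ring of n binary (Bernoulli) variables with
    nearest-neighbour coupling -- a finite toy version of a Markov
    random field over a bidirectional sequence.  Positive coupling makes
    equal neighbours more likely, so the 'pattern' (long runs of equal
    symbols) arises purely from local dependencies."""
    rng = random.Random(seed)
    x = [rng.randrange(2) for _ in range(n)]
    for _ in range(sweeps):
        for i in range(n):
            # spin-valued (+/-1) sum of the two neighbours
            s = (2 * x[i - 1] - 1) + (2 * x[(i + 1) % n] - 1)
            p1 = 1 / (1 + math.exp(-2 * coupling * s))
            x[i] = 1 if rng.random() < p1 else 0
    return x

x = gibbs_ising_ring()
agree = sum(x[i] == x[i - 1] for i in range(len(x))) / len(x)
print(agree)  # noticeably above 0.5: neighbours tend to agree
```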
Here are three papers I read a little while back on (and related to) something called the Indian Buffet process: (http://www.cs.utah.edu/~hal/docs/daume08ihfrm.pdf) (http://cocosci.berkeley.edu/tom/papers/ibptr.pdf) (http://www.cs.man.ac.uk/~mtitsias/papers/nips07.pdf)
These may not quite be what you are looking for, since they deal with a bound on the extent of the interactions; you probably want to think about probability distributions of binary matrices with an infinite number of rows and columns (which would correspond to an adjacency matrix over an infinite graph).
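For concreteness, here is a hedged sketch of the Indian Buffet Process’s generative story as I understand it (the names and parameter values are mine): each “customer” (row) samples existing “dishes” (columns) in proportion to their popularity and then adds a Poisson number of new ones, yielding a binary matrix whose number of columns is unbounded.

```python
import math
import random

def indian_buffet(n_customers=10, alpha=2.0, seed=0):
    """One draw from the Indian Buffet Process: a prior over binary
    matrices with a fixed number of rows (customers) and an unbounded
    number of columns (dishes/features)."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's inversion method; fine for small lam
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    counts = []   # counts[k] = how many customers took dish k so far
    rows = []
    for i in range(1, n_customers + 1):
        row = []
        for k, m_k in enumerate(counts):
            take = 1 if rng.random() < m_k / i else 0   # popularity-biased
            counts[k] += take
            row.append(take)
        for _ in range(poisson(alpha / i)):             # brand-new dishes
            counts.append(1)
            row.append(1)
        rows.append(row)
    width = len(counts)
    return [r + [0] * (width - len(r)) for r in rows]   # pad to full width

Z = indian_buffet()
print(len(Z), len(Z[0]))
```

The expected number of columns grows like alpha times the harmonic number of the row count, so the matrix is almost surely finite for any finite number of customers while having no fixed bound.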