Let X be a long bitstring. Suppose you run a small Turing machine T, and it eventually outputs X. (No small Turing machine outputs X quickly.)
Either X has low Kolmogorov complexity.
Or X has high Kolmogorov complexity, but the universe runs in a nonstandard model where T halts. Hence the value of X is encoded into the universe by the nonstandard model. Hence I should do a Bayesian update about the laws of physics, and expect that X is likely to show up in other places. (Low conditional complexity.)
These two options are different views on the same thing.
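To make the first option concrete, here is a minimal Python sketch (my own illustration, not part of the original comment): a million-character string has low Kolmogorov complexity because a 21-byte program prints it.

```python
# Sketch: if a small program outputs X, then K(X) is small, because K(X)
# is bounded above by the length of any program that prints X (plus O(1)).

small_program = 'print("01" * 500_000)'   # a 21-byte description...
X = "01" * 500_000                        # ...of a 1,000,000-character string

print(len(small_program))  # 21, so K(X) <= 21 + O(1)
print(len(X))              # 1000000
```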
This looks like the problem of abiogenesis, which boils down to the problem of the creation of the first string of RNA capable of self-replication, estimated to be at least 100 base pairs long.
I have no idea what you are thinking. Either you have some brilliant insight I have yet to grasp, or you have totally misunderstood. By “string” I mean abstract mathematical strings of symbols.
OK, I will try to explain the analogy. There are two views of the problem of abiogenesis of life on Earth:
a) Our universe is just a simple generator of random RNA strings across billions of billions of planets, and it randomly generated the string capable of self-replication that was at the beginning of life. The minimum length of such a string is 40–100 nucleotides. It has been estimated that 10^80 Hubble volumes would be needed for such random generation.
b) Our universe is adapted to generate strings that are more capable of self-replication. This was discussed in the comments to this post.
This looks similar to what you described: (a) is the situation of a universe with low Kolmogorov complexity, which just brute-forces life; (b) is a universe whose physical laws have higher Kolmogorov complexity but which is more effective at generating self-replicating strings. The Kolmogorov complexity of such a string is very high.
A quote from the abstract of the paper linked in (a):
A polymer longer than 40–100 nucleotides is necessary to expect a self-replicating activity, but the formation of such a long polymer having a correct nucleotide sequence by random reactions seems statistically unlikely.
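As a rough illustration of why random assembly looks “statistically unlikely” under this assumption, here is a back-of-the-envelope sketch (my addition; only the 40–100 nucleotide figure comes from the quoted abstract, and the assumption that essentially one sequence works is for illustration):

```python
import math

L = 100             # strand length in nucleotides, from the quoted 40-100 range
sequences = 4 ** L  # 4 bases, so 4^L possible strands of length L

print(f"4^{L} ~= 10^{math.log10(sequences):.0f}")  # about 10^60 possibilities

# If only one (or a handful) of these strands self-replicates, the chance per
# random strand is ~10^-60, which is what drives estimates on the scale of
# "10^80 Hubble volumes" of random chemistry.
```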
Let’s say that no string of nucleotides of length < 1000 could self-replicate, and that 10% of nucleotide strings of length > 2000 could. Life would form readily.
The “seems unlikely” appears to come from the assumption that correct nucleotide sequences are very rare.
What evidence do we have about what proportion of nucleotide sequences can self-replicate?
Well, it is rare enough that it hasn’t happened in a jar of chemicals over a weekend. It happened at least once on Earth, although there are anthropic selection effects associated with that. (The great filter could be something else.) It seems to have happened only once on Earth, although one lineage could have beaten the others in Darwinian selection.
We can estimate the a priori probability that some sequence will work at all by taking a random working protein and comparing it with all other possible strings of the same length. I think this probability will be very small.
I agree that this probability is small, but I am claiming it could be 1-in-a-trillion small, not 1-in-10^50 small.
How do you intend to test 10^30 proteins for self-replication ability? The best we can do is to mix up a vat of random proteins and leave it in suitable conditions to see if something replicates, then sample the vat to see if it’s full of self-replicators. Our vat has less mass, and exists for less time, than the surface of the prebiotic Earth. (Assuming near-present levels of resources; some K3 civilization might well try planetary-scale biology experiments.) So there is a range of probabilities where we won’t see abiogenesis in a vat, but it is likely to happen on a planet.
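Here is a Fermi sketch of that gap (all the specific numbers are my own illustrative assumptions, not from the comment above):

```python
import math

# Illustrative assumptions: a generous lab vat of ~10^4 kg running ~10 years,
# versus prebiotic oceans of ~10^21 kg reacting for ~10^8 years.
vat_trials    = 1e4  * 1e1   # mass * time, in arbitrary (kg * year) units
planet_trials = 1e21 * 1e8

print(f"planet / vat ~= 10^{math.log10(planet_trials / vat_trials):.0f}")  # ~10^24

# Any per-trial probability between roughly 1/planet_trials and 1/vat_trials
# yields abiogenesis on the planet but (almost surely) never in the vat.
```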
We can run a test with computer viruses: what is the probability that random code will be a self-replicating program? A probability of 1 in 10^50 is not that extraordinary; it is just the probability of around 150 bits of code being in the right places.
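A quick conversion between these probabilities and “bits in the right places” (my sketch of the arithmetic, not part of the original exchange):

```python
import math

# A probability p corresponds to log2(1/p) bits that must all come out right.
for label, p in [("1 in a trillion", 1e-12), ("1 in 10^50", 1e-50)]:
    print(f"{label}: ~{math.log2(1 / p):.0f} bits")

# Output:
#   1 in a trillion: ~40 bits
#   1 in 10^50: ~166 bits
# So 1 in 10^50 is in the ~150-bit ballpark mentioned above, while 1 in a
# trillion forces only ~40 bits.
```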
Or X has high Kolmogorov complexity, but the universe runs in a nonstandard model where T halts.
Disclaimer: I barely know anything about nonstandard models, so I might be wrong. I think this means that T halts after a number of steps equal to a nonstandard natural number, which comes after all standard natural numbers. So how would you see that it “eventually” outputs X? Even trying to imagine this is too bizarre.
You have the Turing machine next to you; you have seen it halt. What you are unsure about is whether the current time is standard or nonstandard.
Since nonstandard natural numbers come after standard natural numbers, I will also have noticed that I’ve already lived for an infinite amount of time, so I’ll know something fishy is going on.
The problem is that nonstandard numbers behave like standard numbers from the inside.
Nonstandard numbers still have decimal representations; it’s just that the number of digits is nonstandard. They have prime factors, and some of them are prime.
We can look at them from the outside and say they are infinite, but from within, they behave just like very large finite numbers. In fact, there is no formula in first-order arithmetic, with one free variable, that is true of all standard numbers and false of all nonstandard numbers.
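The standard way to see that last claim is the induction/overspill argument; here is a sketch in LaTeX (my addition, not part of the original comment):

```latex
% Sketch: no first-order formula defines exactly the standard numbers.
Suppose $\varphi(x)$ held of exactly the standard elements of a nonstandard
model $M \models \mathrm{PA}$. The standard numbers contain $0$ and are
closed under successor, so
\[
  \varphi(0) \quad\text{and}\quad \forall x\,\bigl(\varphi(x) \to \varphi(x+1)\bigr)
\]
hold in $M$. The induction axiom of $\mathrm{PA}$, applied to $\varphi$
inside $M$, then gives $M \models \forall x\,\varphi(x)$, so $\varphi$ holds
of the nonstandard elements as well, a contradiction.
```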
In what sense is a disconnected number line “after” the one with the zero on it?
In the sense that every nonstandard natural number is greater than every standard natural number.