You keep repeating how much information an AI could derive from a very small measurement (the original example was an apple falling), and the last story was supposed to be an analogy to it, but the idea of an entire civilization's worth of physical evidence already being available makes the AI's job much easier. The original assertion of deriving modern physics from a falling apple looks ridiculous because you never specified the AI's prior knowledge or the amount of non-redundant information available in the falling-apple scenario.

If we are rigorous with the definitions, we end up with a measure of how efficiently an intelligence can extract new information from a given piece of evidence and how efficiently it can update its own theories in light of that evidence. I agree that a self-improving AI could reach the theoretical limits of efficiency at updating its own theories, but the efficiency of extracting information from an experiment depends more on what the experiment is measuring and on the resolution of the measurements.

The assertion that an AI could see an apple fall and theorize general relativity is meaningless without saying how much prior knowledge it has; in a tabula rasa state almost nothing could come from this observation, and it would need much more evidence before anything meaningful started to arise. The resolution of the evidence also matters: it's absurd to believe there are no local maxima in the theory-space search that the AI would favor simply because the resolution isn't sufficient to show those theories are dead wrong. The AI would have no way to accurately assess this impact (assuming it's unable to improve the resolution or manipulate the environment).

That's the essence of what I think is wrong with your belief about how much an AI could learn from certain forms of evidence: I agree with the idea, but your reasoning is much less formal than it should be, and it ends up looking like magical thinking. With sufficient resolution and a good set of sensors, there's enough evidence today to believe that an AI could use a very small number of orthogonal experiments to derive all of modern science (I would bet the actual number is smaller than one hundred), but if the resolution is insufficient, no number of experiments would do.
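To make the resolution point concrete, here is a minimal toy sketch (my own construction, purely illustrative, not anything from your post): two "theories" of free fall that differ only in the fourth decimal place of g, observed through a sensor that quantizes position readings to a fixed resolution. Below a certain resolution, every reading the sensor can report is identical under both theories, so no amount of updating on that data can separate them, however intelligent the observer is.

    import math

    def fall_position(g, t):
        # position after t seconds of free fall under gravity g
        return 0.5 * g * t * t

    def quantize(x, resolution):
        # what the sensor actually reports, rounded to its resolution
        return round(x / resolution) * resolution

    def distinguishable(g1, g2, resolution, times):
        # True if any quantized reading differs between the two theories
        return any(quantize(fall_position(g1, t), resolution)
                   != quantize(fall_position(g2, t), resolution)
                   for t in times)

    times = [i * 0.1 for i in range(1, 31)]   # 3 seconds of observations
    g_newton, g_alt = 9.8066, 9.8067          # two nearly identical theories

    for res in (1e-1, 1e-3, 1e-5):
        print(res, distinguishable(g_newton, g_alt, res, times))

At the coarsest resolution the result is False: every observation is consistent with both theories, so the evidence carries zero bits of information about which one is right, and the "wrong" theory survives as a local maximum. Only once the sensor crosses a resolution threshold does the same experiment start to discriminate between them.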