I understand your search-based intelligence as a simple brute-force search over all the possibilities, except that the agent already knows a bitstring from the start of the solution, so it only has to consider possibilities starting with those bits. This algorithm, once given its utility function, can't make gains by trading off time and space.
The hypothesis that humans have a large enough amount of advice built into their genetic code is almost certainly wrong. The human genome contains about 4 billion bits, and most of those bits describe low-level biology, not brain function. It's also unclear how evolution got access to advice bits for spacecraft design. But even ignoring those points, imagine a 20Gb pile of data produced by humans. While not perfectly optimized, the data will almost certainly be more than 50% optimized. (If you set every other letter of a book at random, there is literally no way to fill in the remaining letters to make a good book.) If you doubt this, use a bigger pile of data and a lower threshold: take a 1Tb pile of text; if you set 99% of a book's characters at random, there is no way to fill in the remaining 1% to make sense. This shows that humans can output more than 10Gb of optimization pressure. Being generous and counting each neuron firing, the runtime of humanity is around 10^36 steps. This would put a hard limit of 4 billion + log_2(10^36) ≈ 4 billion + 120 bits on the optimization pressure that humanity could exert.
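To make the bound concrete, here is a minimal sketch of the arithmetic above, assuming the stated figures (4 billion genome bits, ~10^36 total computation steps) and the fact that a brute-force searcher running for T steps can contribute at most log_2(T) bits of optimization pressure beyond its built-in advice:

```python
import math

# Figures from the argument above (assumptions, not measured values).
genome_bits = 4_000_000_000   # 'advice' bits available in the genome
runtime_steps = 10 ** 36      # generous runtime: every neuron firing counted

# A brute-force searcher gains at most log2(T) bits from T steps of search.
runtime_bits = math.log2(runtime_steps)

# Hard upper bound on optimization pressure under this model.
bound = genome_bits + runtime_bits

print(f"runtime contributes ~{runtime_bits:.1f} bits")   # about 120
print(f"total bound: ~{bound:.0f} bits")
```

The point of the calculation is that the runtime term is negligible: ~120 bits against the more-than-10Gb of optimization that human output demonstrably contains.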
The hypothesis that humans' smart search is based on our genetic code containing advice in this sense is clearly false.
You could think of the 'advice' given by evolution as being in the form of a short program, e.g. for a neural-net-like learning algorithm. In this case, a relatively short string of advice could result in a lot of apparent optimization.
(For the book example: imagine a species that outputs books of 20Gb containing only the letter 'a'. This is very unlikely to be produced by random choice, yet it can be specified with only a few bits of 'advice'.)
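A minimal sketch of that parenthetical point: the entire 'advice' can be a program a few dozen bytes long, even though its output is astronomically unlikely under random letter choice. (The function name and the small demo size are illustrative choices, not anything from the original.)

```python
# The whole 'advice' is this short program: it emits a book of n_bits
# bits (20Gb by default, i.e. 2.5 GB of text) consisting only of 'a'.
# A random process picking letters independently would produce this
# output with vanishing probability, yet the program is tiny.
def all_a_book(n_bits: int = 20 * 10**9) -> str:
    return "a" * (n_bits // 8)   # one ASCII 'a' per 8 bits

# Small demo so we don't actually allocate 2.5 GB: an 80-bit 'book'.
book = all_a_book(80)
print(book)  # 'aaaaaaaaaa'
```

The output looks highly 'optimized' relative to random text, but almost none of that apparent optimization had to be stored bit-for-bit in the advice string.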