Since the algorithm can be compressed well (it fits into a human brain), and since that form of the algorithm takes its input a few bits at a time (and not a day’s worth in a single go), it seems likely that a fully static representation can also be highly compressed and would not need to take the full 2^(10^7) bits. Especially so if you allow the algorithm to be slightly imprecise in its output.
Jacob was drastically oversimplifying, because the algorithm (assuming we restrict ourselves to responses to visual stimuli) does not convert one retinal image to some particular, constant output; a conscious being would not respond to the same image in the same way every time.
Instead, it converts one input brain state plus one retinal image to one output brain state, and brain states consist of a similarly enormous amount of information.
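Purely to illustrate the size argument, here is a minimal Python sketch; the update rule `step` is a made-up stand-in, not anyone's model of the brain. The point is only that a streaming state-update function can have a tiny description, while the equivalent fully static table grows as 2^n in the number of input bits.

```python
# Illustrative sketch only: "step" is an arbitrary stand-in, not a model of
# any real neural computation. It shows why a streaming state-update rule can
# have a tiny description even though its explicit input->output table is
# astronomically large.

def step(state: int, image_bits: int, n_bits: int = 64) -> int:
    """Hypothetical update rule: fold the input into the state a few bits
    at a time, the way a brain consumes a stimulus incrementally."""
    for i in range(0, n_bits, 8):
        chunk = (image_bits >> i) & 0xFF             # next few bits of input
        state = (state * 31 + chunk) % (2 ** n_bits)  # mix chunk into state
    return state

# The explicit static table for this rule over n-bit states and n-bit images
# would need 2**n * 2**n rows; for a 10^7-bit retinal image the analogous
# table would need on the order of 2**(10**7) entries. The compressed,
# streaming form above stays a handful of lines regardless of n.
table_rows = 2**64 * 2**64   # already ~3.4e38 rows at a mere 64 bits
print(f"explicit table rows at 64 bits: {table_rows:.3e}")
print("one streamed transition:", step(state=12345, image_bits=0xDEADBEEF))
```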
Perhaps the difference between successive brain states, induced by visual input, isn't all that enormous.
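If that is right, here is a hedged sketch of the consequence: representing a transition as the delta between two nearly identical large states takes very little space, even when each full state is enormous. The byte strings below are arbitrary stand-ins, not brain states.

```python
# Sketch under the assumption above: if successive states differ only
# slightly, the *delta* between them is highly compressible even when each
# full state is huge.
import os
import zlib

state_size = 10**6                      # one-megabyte stand-in for a "state"
state_a = bytearray(os.urandom(state_size))
state_b = bytearray(state_a)
for i in range(0, state_size, 10_000):  # flip a handful of bytes: the "visual update"
    state_b[i] ^= 0xFF

delta = bytes(a ^ b for a, b in zip(state_a, state_b))
print("full state, compressed:", len(zlib.compress(bytes(state_a))))  # ~10^6 bytes (random data barely compresses)
print("delta, compressed:     ", len(zlib.compress(delta)))           # a few kilobytes
```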