Not if you consider what Karl mentions above. The problem is that the amount of thought you can hold in your head at one time is finite, and it differs significantly from one person to another.
In other words: algorithms need working memory, which is not boundless.
Well, first off, I was assuming pencil and paper were allowable augmentations.
I would be surprised if a brain process that finds big insights with N ‘bits’ of working memory couldn’t be serialized into a sequence of small insights produced by a similar process running with only N/2 ‘bits’ available.
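A minimal sketch of what that serialization might look like (the task, the chunk size, and the "paper" note are all hypothetical illustrations, not anything stated above): the same result is reached by a process that holds everything at once and by one that only ever holds half as much, writing a single intermediate note down between passes.

```python
# Hypothetical illustration: the same "insight" (here, just the brightest
# value in a list) found two ways. The first holds all N values in working
# memory at once; the second only ever looks at N/2 of them, keeping one
# written note ("paper") between chunks.

def brightest_all_at_once(values):
    """Requires the whole list in working memory at the same time."""
    return max(values)

def brightest_serialized(values, chunk_size):
    """Only ever inspects chunk_size values, plus one note on paper."""
    paper = None  # the lone intermediate result we write down
    for start in range(0, len(values), chunk_size):
        chunk = values[start:start + chunk_size]
        local_best = max(chunk)              # a small insight from a small window
        if paper is None or local_best > paper:
            paper = local_best               # update the written note
    return paper

values = [12, 200, 37, 90, 255, 41, 7, 180]
assert brightest_all_at_once(values) == brightest_serialized(values, len(values) // 2)
```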
Imagine yourself studying a 4-megapixel digital image only by looking at it one pixel at a time. Yes, you can look at a pixel, and you can even write down what color it was. Later you can refer back to this list and see what color a particular pixel was. It's hard to remember more than a few dozen at once, though, so how will you ever have a complete picture of it in your head?
I could find and write down a set of instructions that would allow you to determine if there was a face in the image. If you were immortal and I were smarter, I could write down a set of instructions that might enable you to derive the physics of the photographed universe given a few frames.
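To make the pixel-at-a-time discipline concrete, here is a toy sketch (the tiny image, the notebook dictionary, and the darkness rule are all invented for illustration, not a real face detector): the observer never holds more than one pixel in their head, everything else lives in the written record, and the "instructions" are just a rule run over that record one entry at a time.

```python
# Toy illustration only: a tiny grayscale "image", examined strictly one
# pixel at a time. Every observation goes straight into a written record
# (the notebook); the later "instructions" read that record back one entry
# at a time, so nothing ever depends on seeing the whole picture at once.

WIDTH, HEIGHT = 4, 4
image = [                       # 0 = black, 255 = white
    [255, 255, 255, 255],
    [255,  30,  30, 255],
    [255,  30,  30, 255],
    [255, 255, 255, 255],
]

# Step 1: look at one pixel, write it down, forget it, move on.
notebook = {}
for y in range(HEIGHT):
    for x in range(WIDTH):
        notebook[(x, y)] = image[y][x]

# Step 2: follow a written instruction over the notebook, again one entry
# at a time. (A stand-in rule, not face detection: "count how many dark
# pixels you recorded.")
dark_count = sum(1 for value in notebook.values() if value < 80)
print("dark region recorded?", dark_count >= 4)
```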
At this level it’s like the Chinese room.
But I don’t think the ratio between Einstein’s working memory and a normal person’s working memory is 100,000 to 1.
It would be EASY to make instructions to find faces even if someone could only see and remember 1/16th of the image at a time. You get tons of image processing for free. “Is there a dark circle surrounded by a color?”
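Something like that quoted rule could itself be written down as instructions over sixteenth-sized tiles. The sketch below is my own guess at what that might mean (the 4x4 tiling, the grayscale convention, and the 0.5 contrast ratio are assumptions, not anything specified above): each tile is examined entirely on its own, asking whether its centre is markedly darker than its border.

```python
# Illustrative guess at the "dark circle surrounded by a color" rule,
# applied one sixteenth of the image at a time. The 4x4 tiling, the
# grayscale convention, and the 0.5 contrast ratio are assumptions.

def tile_has_dark_center(tile):
    """tile: list of grayscale rows. True if its centre is much darker
    than its border."""
    rows, cols = len(tile), len(tile[0])
    border, center = [], []
    for y in range(rows):
        for x in range(cols):
            on_edge = y in (0, rows - 1) or x in (0, cols - 1)
            (border if on_edge else center).append(tile[y][x])
    if not center or not border:
        return False
    return sum(center) / len(center) < 0.5 * (sum(border) / len(border))

def scan_in_sixteenths(image):
    """Cut the image into a 4x4 grid and inspect each tile in isolation,
    never holding more than one sixteenth of the pixels at once."""
    tile_h, tile_w = len(image) // 4, len(image[0]) // 4
    hits = []
    for ty in range(4):
        for tx in range(4):
            tile = [row[tx * tile_w:(tx + 1) * tile_w]
                    for row in image[ty * tile_h:(ty + 1) * tile_h]]
            if tile_has_dark_center(tile):
                hits.append((tx, ty))
    return hits
```

Run tile by tile like this, the check never needs more than a sixteenth of the pixels in view at once, which is the point: the local rule can be crude and the written procedure still adds up to the global judgment.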
A human-runnable algorithm to turn data into concepts would be different in structure, but not in kind.