I would challenge your friend on two premises that are pretty clearly false:
1) You need to fully understand a human brain in order to create intelligence. This is no more true than the statement “you need to fully understand Microsoft Windows in order to create an operating system” or “you need to fully understand the internal combustion engine in order to create an automobile.”
2) A human brain needs to fully understand some form of intelligence at all in order to create intelligence. This is false because the human brain very rarely fully understands something using just the resources within itself. How many times have you used pen and paper in order to solve a math problem? The human brain just needs to understand the intelligence in pieces well enough to write down equations that it can tackle one at a time, or to write useful computer programs. This is related to your co-brain idea, though mine is more general—in your argument, the co-brain is another conscious entity, while in mine it could be a bunch of tools, or perhaps tools plus more conscious entities.
Thanks! I presented him with these arguments as well, but they are more familiar on LW and so I didn’t see the utility of posting them here. The above argument felt more constructive in the mathematical sense. (Although my friend is still not convinced.)