The brain is the most complex information-exchange system in the known Universe, so Whole Brain Emulation is going to be really hard. I would probably go with a different solution; I think myopic AI has potential.
EDIT: It may also be worth considering building an AI with no long-term memory. If you want it to do a thing, you put in some parameters (“build a house that looks like this”), and they are automatically wiped once the goal is achieved. Since the neural structure is fundamentally static (not sure how to build it, but it should be possible?), the AI cannot rewrite itself to stop losing memory. If it doesn’t remember things, it probably can’t come up with a plan to prevent itself from being reset/turned off, or kill all humans, or build a new AI with no such limitations. And then you also reset the whole thing every day, just in case.
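For concreteness, here is a minimal sketch of that reset discipline in Python. Everything in it is hypothetical: the “policy” is a toy stand-in for a real model, and the point is only the structure, namely frozen parameters, per-task goal parameters that exist only inside one episode, and nothing persisting across runs:

```python
def make_frozen_policy(weights):
    """Return an action function whose parameters cannot change at runtime."""
    frozen = tuple(weights)  # immutable snapshot: the net cannot rewrite itself

    def act(task_params, observation):
        # A pure function of (frozen, task_params, observation): there is no
        # hidden state between calls, hence nothing to accumulate plans in.
        return task_params["target"] - observation + sum(frozen)

    return act

def run_episode(act, task_params, max_steps=100):
    """Run one task; all episode-specific state dies when this returns."""
    observation = 0.0
    for _ in range(max_steps):
        action = act(task_params, observation)  # stateless decision
        observation += action                   # toy stand-in for the world
        if abs(observation - task_params["target"]) < 1e-9:
            break
    # task_params and all working memory go out of scope here, so no goal
    # or memory survives into the next episode: the "wipe" is structural.

policy = make_frozen_policy([0.0])
run_episode(policy, {"target": 5.0})   # parameters exist only for this run
run_episode(policy, {"target": -3.0})  # the next episode starts from nothing
```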
“Complex” doesn’t imply “hard to emulate”. We likely won’t need to understand the encoded systems, just the behavior of the neurons. In high school I wrote a simple simulator of charged particles: the rules I needed to encode were simple, but it displayed behaviors I hadn’t programmed in or expected, which were, in fact, real phenomena that really happen.
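As an illustration of that point (the original simulator’s code isn’t given, so this is a hypothetical reconstruction in Python): the entire rule set below is Coulomb’s law plus naive Euler integration, yet set a light charge on a tangential course past a heavy opposite one and an orbit emerges that is nowhere written in the code.

```python
K = 1.0  # Coulomb constant in arbitrary units

def step(particles, dt=0.001):
    """One Euler step; each particle is a dict with q, m, pos, vel."""
    forces = []
    for i, a in enumerate(particles):
        fx = fy = 0.0
        for j, b in enumerate(particles):
            if i == j:
                continue
            dx = a["pos"][0] - b["pos"][0]
            dy = a["pos"][1] - b["pos"][1]
            r2 = dx * dx + dy * dy + 1e-12  # softening avoids division by zero
            f = K * a["q"] * b["q"] / r2    # Coulomb's law: F = k*q1*q2 / r^2
            r = r2 ** 0.5
            fx += f * dx / r                # like charges repel (f > 0),
            fy += f * dy / r                # opposite charges attract (f < 0)
        forces.append((fx, fy))
    for a, (fx, fy) in zip(particles, forces):
        a["vel"][0] += fx / a["m"] * dt
        a["vel"][1] += fy / a["m"] * dt
        a["pos"][0] += a["vel"][0] * dt
        a["pos"][1] += a["vel"][1] * dt

# A light negative charge given tangential speed near a heavy positive one:
# the word "orbit" appears nowhere above, but an orbit is what you get.
particles = [
    {"q": 1.0,  "m": 100.0, "pos": [0.0, 0.0], "vel": [0.0, 0.0]},
    {"q": -1.0, "m": 0.01,  "pos": [1.0, 0.0], "vel": [0.0, 10.0]},
]
for _ in range(2000):
    step(particles)
```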
I would argue that the most complex information-exchange system in the known Universe will be hard to emulate; I don’t see how it could be any other way. We already understand neurons well enough to emulate them individually, and that is not nearly enough: you will not be able to do whole brain emulation without understanding the inner workings of the system as a whole.