These “Whole Brain Emulation” discussions are surreal for me. I think someone needs to put forward the best case they can find that human brain emulations have much of a chance of coming before engineered machine intelligence.
The efforts in that direction I have witnessed so far seem feeble and difficult to take seriously—while the case that engineered machine intelligence will come first seems very powerful to me.
Without such a case, why spend so much time and energy on a discussion of what-if?
Personally I don’t have a strong opinion on which will come first, both seem entirely plausible to me.
We have a much better idea of how difficult WBE is than of how difficult engineering a human-level machine intelligence is. We don't even know for sure whether the latter is possible for us at all (other than by pure trial and error).
There is a reasonably obvious path from where we are to WBE, while we aren't even sure how to go about learning to engineer an intelligence; it's entirely possible that "start by studying WBEs in detail" is the best available approach.
There are currently far more people studying the things required for WBE than there are people studying AGI, and it's difficult to identify other fields of study that would benefit AGI more than WBE research would.
Why do you consider the possibility of smarter-than-human AI at all? The difference between the AI we have now and that is bigger than the difference between the two technologies you are comparing.
I don’t understand why you are bothering asking your question—but to give a literal answer, my interest in synthesising intelligent agents is an offshoot of my interest in creating living things—which is an interest I have had for a long time and share with many others. Machine intelligence is obviously possible—assuming you have a materialist and naturalist world-view like mine.
I think someone needs to put forward the best case they can find that human brain emulations have much of a chance of coming before engineered machine intelligence.
I misunderstood. I thought you were saying it was your goal to prove that, rather than that you expected it would not be proven. My question does not make sense.
Thanks for clarifying!