Whole brain emulation doesn’t require understanding of brain architecture, only the dynamics of its lowest-level components.
I fear the same philosophical reasoning may be applied to modeling neural architecture as is currently used for econometric forecasting. Even the most complex economic models cannot account for significant exogenous variables.
For the record, I think we can get to WBE; however, I think a premature launch would be terrible. Given the lack of research into developmental AI (much of the notable work done by a friend, Dr. Frank Guerin at the University of Aberdeen), I think there is a long way to go.
Granting that a brain model, or WBE, would be as accurate as the biological version, why would that not be the most efficient method? The problems of testing and implementation are the same as for any other AI, if not easier because of our familiarity with the brain; moreover, the approach is grounded in specific biological benchmarks which, at that point, would be immediately identifiable.
I could go on with my particular thoughts as to why biological simulation is, in my estimation, the better approach, but I am more interested in why the organization (people who have been thinking about this longer and harder than I have) decided otherwise. It would seem that their collective reasoning would yield an answer clear and precise enough to leave no ambiguity.
Granting that a brain model, or WBE, would be as accurate as the biological version, why would that not be the most efficient method?
You have to instill the right preference, and just having a working improved brain doesn’t give this capability. You are trying to make an overwhelmingly powerful ally; just making something overwhelmingly powerful is suicide. As CannibalSmith said, brains are not particularly Friendly. Read the paper.
You have to instill the right preference, and just having a working improved brain doesn’t give this capability.
Of course: we have to BUILD IT RIGHT. I couldn’t agree more. The cognitive model does not call for a mere carbon copy of some particular brain chosen at random; as you know, it is not so limited in focus. The fantastic part about the method is that by the point at which it can be done correctly (not simply as an apparent approximation), the tools will likely be available, developed in the course of structuring it, to correct a large portion of what we identify as fatal cognitive errors. Any errors that are missed would, it stands to reason, also be missed given the same amount of time with any other developmental approach.
I am familiar with the global risk paper you linked, with AI: A Modern Approach, which addresses the issue of cognitive modeling, and with Drescher’s Good and Real and the problems associated with an FAI.
The same existential risks and potential for human disasters are inherent in all AI systems, regardless of structure, by virtue of their “power.” I think one of the draws to this type of development is the fantastic responsibility that comes with it: recognizing and accounting for the catastrophic results that are possible.
That said, I have yet to read a decision-theoretic explication of which structure is the optimal method of development, weighing all known limiting factors. I think AI: A Modern Approach comes closest to doing this, but it falls short in that it narrows its focus without a thorough comparison of methods. So again I ask: by what construct has it been determined that a logical, symbolic-programming approach is optimal?