For those of us not well schooled in these matters, can you explain, or link to an explanation of, why it's a crazy idea? I'm intuitively sympathetic to Yudkowsky, but an emulations-first scenario doesn't seem obviously crazy.
I've been trying to put together a survey paper on why uploads coming first is not at all crazy. Whether it's more likely than local-super-AI-in-a-basement, I don't know and will leave to the experts. But brain emulation, as far as we currently understand it, looks much like the problem of landing on the Moon did circa 1962 -- largely a matter of scaling up what we already know how to do (though for mind emulation we certainly have much more than a decade to go).
Ken Hayworth, now working in the Lichtman connectomics lab at Harvard, has recently written up such a survey paper to appear in the International Journal of Machine Consciousness this coming summer. I interviewed him and reviewed a preprint of his paper, comparing it with some counter-arguments from Ian Parberry and Paul Allen.
You can find my paper here. I would very much appreciate comments or suggestions if you have them. I'm already planning another section on compression and sensitivity issues with this much data, and perhaps one on the ethical questions surrounding the first uploads. This was a mid-term project for this course, so due to time constraints the scope couldn't be as broad, or the treatment as thorough, as I had hoped.
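(As a rough illustration of the scale behind that planned compression-and-sensitivity section, here is a minimal back-of-envelope sketch in Python. The neuron and synapse counts are the commonly cited ballpark figures; the bytes-per-synapse record size and the compression ratio are purely assumptions for illustration.)

    # Back-of-envelope estimate of raw connectome data volume.
    # Neuron/synapse counts are commonly cited ballpark figures;
    # the per-synapse record size is a purely illustrative assumption.

    NEURONS = 8.6e10          # ~86 billion neurons (commonly cited)
    SYNAPSES = 1.0e14         # ~10^14 synapses (order-of-magnitude figure)
    BYTES_PER_SYNAPSE = 16    # hypothetical: pre/post neuron IDs + weight + type

    print(f"~{SYNAPSES / NEURONS:.0f} synapses per neuron on average")

    raw_bytes = SYNAPSES * BYTES_PER_SYNAPSE
    print(f"Raw synapse table: ~{raw_bytes / 1e15:.1f} PB")

    # Even a generous (purely illustrative) 10:1 lossy compression ratio
    # leaves a dataset large enough that sensitivity to what gets thrown
    # away is a real question.
    print(f"After 10:1 compression: ~{raw_bytes / 10 / 1e15:.2f} PB")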
For those of us not well schooled in these matters, can you explain, or link to an explanation of, why it's a crazy idea? I'm intuitively sympathetic to Yudkowsky, but an emulations-first scenario doesn't seem obviously crazy.
My last attempt at that: Against Whole Brain Emulation.
Anissimov had a recent post on how few computing machines are bio-inspired.
As I say, it looks more like a public relations exercise than anything else. The human face of machine intelligence. Yeah, right.