The Human Biological Advantage Over AI
I’ve completed the first full draft of the subject paper, found here, and would be most grateful for comments of any kind.
The paper grew out of my realization that it would help to have an intellectual foundation, beyond personal interest, for the importance of avoiding x-risk. It is easy to say that x-risk would be bad because we humans would rather not go extinct as a matter of personal preference. It is far more powerful to say that humans have unique capabilities that AI systems will never have, and that we should be preserved because it would be a tragedy for the universe to lose those capabilities. My paper puts this case on a firm intellectual foundation, one I have not seen made anywhere else.
For background, I have a PhD in computer science and the deepest respect for the power of software and the pace of innovation. Of course, AI will soon be better than humans at almost everything. However, there is at least one thing on which humans will always have an edge, thanks to our biological makeup and hundreds of millions of years of evolutionary training. That one thing is hugely important: it is the very foundation of our ability to create sustainable ethical systems. And without sustainable ethical systems, grounded in an ability to appreciate the effects of our actions on the physical reality around us, nothing else matters.
And so we have more than a personal interest in avoiding x-risk: we have a responsibility to preserve the unique human capabilities that are critical to the continued progress of the universe. I would love to hear what you think.