I agree with most of what you said, except for this:
Yes, or more sneakily, impossible to implement due to a hidden reliance on human techniques for which there is as yet no known algorithmic implementation.
Firstly, this is an argument for studying “human techniques” and devising algorithmic implementations, not an argument for abandoning these techniques. Assuming the techniques are demonstrated to work reliably, of course.
Secondly, if we assume that uploading is possible, this problem can be hacked around by incorporating an uploaded human into the solution.
Firstly, this is an argument for studying “human techniques” and devising algorithmic implementations, not an argument for abandoning these techniques.
Indeed, I should have been more specific; not all processes used in AI need to be analogous to human ones, of course. All I meant was that it is very easy, when trying to provide a complete spec of a human process, to accidentally lean on other human mental processes that seem, at zeroth glance, to be “obvious”. It’s hard to spot those mistakes without an outside view.
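(A toy illustration of the failure mode, with every name hypothetical: the “spec” below looks complete, but the single call to importance() quietly smuggles in a human faculty for which nobody has an algorithmic implementation.)

```python
# Hypothetical, deliberately toy spec of "summarize a document".
# Two of the three steps are genuinely mechanical; the third is not.

def split_into_sentences(text: str) -> list[str]:
    # Crude but fully algorithmic.
    return [s.strip() + "." for s in text.split(".") if s.strip()]

def importance(sentence: str) -> float:
    # Feels "obvious" to a human reader, yet hides the entire
    # difficulty of the task: no known complete algorithm exists.
    raise NotImplementedError("the human technique in disguise")

def summarize(document: str, max_sentences: int = 3) -> str:
    sentences = split_into_sentences(document)
    # The hidden reliance: sorting by a judgment we cannot implement.
    ranked = sorted(sentences, key=importance, reverse=True)
    return " ".join(ranked[:max_sentences])
```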
Secondly, if we assume that uploading is possible, this problem can be hacked around by incorporating an uploaded human into the solution.
To a degree, though I suspect that even in an uploaded mind it would be tricky to isolate and copy out individual techniques, since they’re all likely to be non-locally cohesive and heavily interdependent.
It’s hard to spot those mistakes without an outside view.
Right, that makes sense.
To a degree, though I suspect that even in an uploaded mind it would be tricky to isolate and copy out individual techniques...
True, but I wasn’t thinking of using an uploaded mind to extract and study those ideas, but simply of plugging the mind into your overall architecture and treating it like a black box that gives you the right answers, somehow. It’s a poor solution, but it’s better than nothing, assuming that the Singularity is imminent and we’re all about to be nano-recycled into quantum computronium, unless we manage to turn the AI into an FAI in the next 72 hours.
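(Something like the following minimal sketch, where every name is made up; the point is only the shape of the arrangement: the surrounding system stays algorithmic where it can and routes everything else to the upload, without ever looking inside it.)

```python
from abc import ABC, abstractmethod

class Oracle(ABC):
    """Anything that, somehow, gives you the right answers."""
    @abstractmethod
    def judge(self, question: str) -> str: ...

class UploadedHuman(Oracle):
    """Stand-in for the uploaded mind; internals deliberately opaque."""
    def judge(self, question: str) -> str:
        # In reality this would run the upload; here it is a stub.
        raise NotImplementedError("opaque human cognition goes here")

class Agent:
    """The overall architecture: algorithmic where we know how,
    deferring to the black box where we don't."""
    def __init__(self, oracle: Oracle):
        self.oracle = oracle

    def can_solve_algorithmically(self, situation: str) -> bool:
        return False  # placeholder: per the discussion, often we can't

    def solve_algorithmically(self, situation: str) -> str:
        return "computed answer"

    def decide(self, situation: str) -> str:
        if self.can_solve_algorithmically(situation):
            return self.solve_algorithmically(situation)
        return self.oracle.judge(situation)  # punt to the black box

agent = Agent(UploadedHuman())
# agent.decide("is this plan friendly?")  # would defer to the upload
```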