It had much in common with your approach. After his talk, a lot of people in the audience, including myself, were shaking their heads in dismay at Selmer’s apparent ignorance of everything in AI since 1985. Richard got up and schooled him hard, in his usual undiplomatic way, in the many reasons why his approach was hopeless.
Which are?
(Not asking for a complete and thorough reproduction, which I realize is outside the scope of a comment, just some pointers or an abridged version. Mostly I wonder which arguments you lend the most credence to.)
Edit: Having read the discussion on “nothing is mere”, I retract my question. There’s such a thing as an argument disqualifying someone from any further discourse on a given topic:
As a result, the machine is able to state, quite categorically, that it will now do something that it KNOWS to be inconsistent with its past behavior, that it KNOWS to be the result of a design flaw, that it KNOWS will have drastic consequences of the sort that it has always made the greatest effort to avoid, and that it KNOWS could be avoided by the simple expedient of turning itself off to allow for a small operating system update ………… and yet in spite of knowing all these things, and confessing quite openly to the logical incoherence of saying one thing and doing another, it is going to go right ahead and follow this bizarre consequence in its programming.
… yes? Unless the ghost in the machine saves it … from itself!