I believe rhollerith. I met him the other week and talked in some detail; he strikes me as someone who’s actually trying. Also, he shared the intellectual roots of his moral position, and the roots make sense as part of a life-story that involves being strongly influenced by John David Garcia’s apparently similar moral system some time ago.
Hollerith doesn’t mean he was applying his moral position to AI design since ’92, he means that since ’92, he’s been following out a possible theory of value that doesn’t assign intrinsic value to human life, to human happiness, or to similar subjective states. I’m not sure why people are stating their disbelief.
Good point, Anna: John David Garcia did not work in AI or apply his system of values to the AI problem, but his system of values yields fairly unambiguous recommendations when applied to the AI problem—far less ambiguous than human-centered ways of valuing things.
Off-topic until May, all.