Because in the preceding comment, I was demonstrating that we should not morally care about light switches, search engines, and paperclippers, whereas we should morally care about fish, dogs, and humans, because of differences in the preference profiles of these beings when they are modeled as agents.
Peter Hurford disagreed with me on the non-moral status of the paper-clipper. I was demonstrating the non-moral status of a being which cared only for paper clips by analogy to a search engine (a being which only cares about bringing up the best search result).
Whereas what Lumifer was saying is that the very premise that a search engine could have choices was fundamentally flawed (which, if true, would cause the whole analogy to break down).
The thing is, it’s not fundamentally flawed to think of a search engine as having choices. Sure, search engines are a little less usefully modeled as agent-like when compared to humans, but it’s just a matter of degree.
That is, he treated the input-output function of a human as a “choice” while treating the input-output function of a machine as “not-a-choice.”
I was objecting to his hard, qualitative binary, not your and Dennett’s soft, quantitative spectrum.
Thanks for clarifying.