They are informative, but not because narrow AI systems are comparable to superintelligent AGIs. It's because the developers, researchers, promoters, and funders of narrow AI systems are comparable to those of putative superintelligent AGIs. What's most interesting here isn't the details of Tay's technology, but rather the group that manages it and the group(s) that will likely be involved in AGI development.
Maybe this is getting too far afield, but I would say that “Purpose” is not only a useful, but an essential heuristic in science when it’s being practiced by a kind of entity (like human beings) who are hard-wired to think in terms of purposeful action. Making the first question “What is this for?” brings to bear the full power of uncounted generations of field-tested behaviors, rules of thumb, and search strategies.
It is awfully important, though, not to make it the last question. I guess that’s where I’d say yes, a “full-blown concept of purpose” in the sense of an unexplained explanation, is unscientific.
Hello, everyone! I've been lurking for about a year and I've finally overcome the anxiety I encounter whenever I contemplate posting. More accurately, enough influences have converged at this very moment that I feel strongly pulled to comment.
I've just tumbled to the fact that I may have an instinctive compulsion against the sort of signalling that's often discussed here and by Robin Hanson. In the last several hours alone I've gone far out of my way to avoid signalling membership in an ingroup or adherence to a specific cohort. Is this sort of compulsion common amongst LWers? (I'm aware that declaring myself an anti-signaller runs the risk of being accused of signalling itself, but whadayagonnado.)
I’m also very interested in how pragmatism, pragmaticism, and Charles Sanders Peirce form (if at all) the philosophical underpinnings of the sort of rationality that LW centers on. It seems like Peirce doesn’t get nearly as much attention here as he should, but maybe there are good reasons for that.
One would hope! Maybe the Tay episode can serve as a cautionary example, in that respect.