Sure, but the point stands: failures of narrow AI systems aren’t informative about likely failures of superintelligent AGIs.
They are informative, but not because narrow AI systems are comparable to superintelligent AGIs. It’s because the developers, researchers, promoters, and funders of narrow AI systems are comparable to those of putative superintelligent AGIs. The most interesting thing here isn’t the detail of Tay’s technology, but rather the group that manages it and the group(s) that will likely be involved in AGI development.
That’s a very good point.
Though one would hope that the level of effort put into AGI safety will be significantly greater than what went into Twitter bot safety...
One would hope! Maybe the Tay episode can serve as a cautionary example, in that respect.
Clearly that didn’t happen.