There are lots of tipoffs to what is fictional and what is real. It might notice, for example, that the Wikipedia article on fiction describes exactly what fiction is, and then note that Wikipedia describes the One Ring as fiction and early warning systems as not. I’m not claiming that it will necessarily have an easy time with this. But the point is that there are not that many steps here, and no single step by itself looks extremely unlikely once one has a smart entity (which frankly, to my mind, is the main issue here; I consider recursive self-improvement to be unlikely).
We are trapped in an endless chain here. The computer would still somehow have to deduce that the Wikipedia entry that describes the One Ring is real, while the One Ring itself is not.
We observe that Wikipedia is mainly truthful. From that we infer that the entry that describes the One Ring is real. From the use of terms like “fiction” and “story” in that entry, we infer that the One Ring itself is not real.
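The two-step inference above can be sketched as a toy heuristic. This is only an illustration of the reasoning pattern, not a serious classifier: the cue list and the article snippets below are hypothetical, and a real system would need far more context than keyword matching.

```python
# Toy sketch of the inference chain: take the entry itself as real
# (premise: Wikipedia is mainly truthful), then use genre cues in the
# entry's text to decide whether the entry's *subject* is real.
# The cue set and sample snippets are made up for illustration.

FICTION_CUES = {"fiction", "fictional", "novel", "fantasy", "story", "character"}

def subject_is_real(article_text: str) -> bool:
    """Infer that the subject is not real if the entry labels it as fiction."""
    words = {w.strip(".,;:()").lower() for w in article_text.split()}
    return not (words & FICTION_CUES)

one_ring = ("The One Ring is a central plot element in the fantasy "
            "novel The Lord of the Rings.")
warning_system = ("An early warning system is a system used to detect "
                  "hazards and issue warnings to the public.")

print(subject_is_real(one_ring))        # False: fiction cues present
print(subject_is_real(warning_system))  # True: no fiction cues
```

The point of the sketch is that each step is simple once the premise (“Wikipedia is mainly truthful”) is in place; the hard part, as the thread discusses, is acquiring that premise in the first place.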
Somehow you learned that Wikipedia is mainly truthful/nonfictional and that the One Ring is fictional. So your question/objection/doubt is really just the typical boring doubt of AGI feasibility in general.
But even humans have trouble with this sometimes. I was recently reading the Wikipedia article Hornblower and the Crisis, which contains a link to the article on Francisco de Miranda. When I clicked through, it took me some time and several cues to realize that de Miranda was a historical figure.
Isn’t Kalla’s objection more a claim that fast takeovers won’t happen because, even with all this data, the problems of understanding humans and our basic cultural norms will take a long time for the AI to learn, and in the meantime we’ll develop a detailed understanding of it, and if it is that hostile, it is likely to make obvious mistakes in the meantime?
Why would the AI be mucking around on Wikipedia to sort truth from falsehood, when Wikipedia itself has been criticized for various errors and is fundamentally vulnerable to vandalism? Primary sources are where it’s at. Looking through the text of The Hobbit and Lord of the Rings, it’s presented as a historical account, translated by a respected professor, with extensive footnotes. There’s a lot of cultural context necessary to tell the difference.