The last time I saw the phrase ‘artificial stupidity’, it referred to people designing chatbots to gain an advantage in ‘Turing Tests’ by making common spelling mistakes consistent with QWERTY keyboards. When deciding between such a bot and a human with good spelling, all else being equal, the judges figured the bot was the human. On the other hand, I could also see this being something a ‘smart’ AI might do.
Today, deliberately introduced limitations are part of better computer chess: coming up with engines that can compete with each other while using fewer and fewer resources. (Last I heard, they can beat the best human players while running on phones.)
I’ve also seen a lot of criticism directed at Google for running tests or ‘competitions’ with very questionable and arbitrary limitations in order to give an advantage to whichever of their programs they want to look good, as with AlphaZero.
Yes, typing mistakes in a Turing Test are an example. It’s “artificially stupid” in the sense that you go from perfect typing to imperfect, human-like typing.
I guess what you mean by “smart” is an AGI that would creatively make those typing mistakes to deceive humans into believing it is human, rather than making them because of some hardcoded feature added for a Turing contest.
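The hardcoded version of that trick is simple enough to sketch. Here's a minimal illustration (my own construction, not from any actual Turing Test entrant) of injecting QWERTY-adjacent typos into otherwise perfect output; the adjacency map and the `rate` parameter are assumptions chosen for the example:

```python
import random

# Partial QWERTY adjacency map, for illustration: each key maps to
# physically neighboring keys a fat-fingered typist might hit instead.
QWERTY_NEIGHBORS = {
    "a": "qwsz", "b": "vghn", "c": "xdfv", "d": "serfcx", "e": "wsdr",
    "f": "drtgvc", "g": "ftyhbv", "h": "gyujnb", "i": "ujko", "j": "huikmn",
    "k": "jiolm", "l": "kop", "m": "njk", "n": "bhjm", "o": "iklp",
    "p": "ol", "q": "wa", "r": "edft", "s": "awedxz", "t": "rfgy",
    "u": "yhji", "v": "cfgb", "w": "qase", "x": "zsdc", "y": "tghu",
    "z": "asx",
}

def add_typos(text, rate=0.05, seed=None):
    """Replace roughly `rate` of the letters with a QWERTY-adjacent key,
    turning perfect chatbot output into human-looking imperfect typing."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        neighbors = QWERTY_NEIGHBORS.get(ch.lower())
        if neighbors and rng.random() < rate:
            typo = rng.choice(neighbors)
            out.append(typo.upper() if ch.isupper() else typo)
        else:
            out.append(ch)
    return "".join(out)
```

The “smart” version would be the AI deciding on its own, mid-conversation, when and how to degrade its typing; the hardcoded version is just this function applied to every reply.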