Feedback systems are much more powerful in existing intelligences. I don’t know if you ever played Black and White, but it had an AI that explicitly learned through experience, and it was very easy to accidentally train it to constantly eat poop or run back and forth stupidly. An elevator control module is very, very simple: it has a set of floors it can go to, and that’s it. It’s barely capable of doing anything actively bad. But what if, a few days a week, some kids came into the office building and rode the elevator up and down for a few hours for fun? It might learn that kids love going to all sorts of random floors. This would be relatively easy to fix, but only because the system is so insanely simple that it’s very clear when it’s acting up.
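To make the failure mode concrete, here is a minimal sketch of that kind of naive feedback learner. Everything here is hypothetical (the `NaiveElevator` class and its floor-frequency rule are invented for illustration, not any real elevator controller): it just counts button presses and treats the most-requested floor as "where people want to be," so a burst of random joyriding pollutes what it has learned exactly as described above.

```python
from collections import Counter
import random

class NaiveElevator:
    """Toy controller that 'learns' an idle floor from raw request frequency."""

    def __init__(self, floors):
        self.floors = floors
        self.requests = Counter()  # floor -> number of times requested

    def record_request(self, floor):
        # Every button press is treated as equally meaningful feedback.
        self.requests[floor] += 1

    def idle_floor(self):
        # Park at the most-requested floor; default to the first floor
        # before any data has been seen.
        if not self.requests:
            return self.floors[0]
        return self.requests.most_common(1)[0][0]

office = NaiveElevator(floors=list(range(1, 11)))

# Normal weekday traffic: mostly the lobby and a couple of busy floors.
for _ in range(100):
    office.record_request(random.choice([1, 1, 1, 5, 5, 7]))

# Kids riding for fun, mashing random buttons for a few hours.
for _ in range(500):
    office.record_request(random.randint(1, 10))

# The learned idle floor now reflects the kids' noise as much as real demand.
print(office.idle_floor())
```

The saving grace, as noted, is that a system this simple is easy to audit: you can dump `office.requests` and immediately see the skewed counts, which is much harder once the learner is anything more complicated than a frequency table.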