I think this sums it up well. To my understanding, it would only require someone “looking over its shoulder”: asking for its specific objective for each drug and the expected results of that drug. I doubt a “limited intelligence” would be able to lie. That is, unless it somehow mutated or accidentally became a more general AI, but then we’ve jumped the rails into a different problem.
It’s possible that I’m paying too much attention to your example and not enough to your general point. I guess the moral of the story, though, is “limited AI can still be dangerous if you don’t take proper precautions”, or “incautiously coded objectives can be just as dangerous in a limited AI as in a general one”. Which I agree with, and is a good point.