For AIs with less-than-human intelligence, deceptive tactics will likely be caught by smarter humans (when a 5-year-old tries to lie to you, it’s just sort of sad or even cute, not alarming). For an AI with greater-than-human intelligence, deception seems to be just one avenue of goal-seeking, and not even a very lucrative or efficient one.
It seems likely to me that there will be a regime where we have transformatively useful AI which has an ability profile that isn’t wildly different than that of humans in important domains. Improving the situation in this regime, without necessarily directly solving problems due to wildly superhuman AI, seems pretty worthwhile. We could potentially use these transformatively useful AIs for a wide variety of tasks which could make the situation much better.
Merely human-ish level AIs which run fast, are cheap, and are “only” as smart as quite good human scientists/engineers could be used to radically improve the situation if we could safely and effectively utilize these systems. (Being able to safely and effectively utilize these systems doesn’t seem at all guaranteed in my view, so it seems worth working on.)