AI systems already exist that are both smart, in that they solve complex and difficult cognitive tasks, and dangerous, in that they make decisions on which significant value rides, so that poor decisions are costly.
But they are not smart in the contextually relevant sense of being able to outsmart humans, or dangerous in the contextually relevant sense of being unboxable.