This seems like a generic problem with user interfaces: making them too smart while they can still “make errors that they don’t know that they’re making” is a recipe for a bad user experience.
If you’re going to have a layer of disintermediation between what happens under the hood and what the user requests, it should either be super super tight (so that the request ALWAYS causes what is desired) or else it should have the capacity to notice fuzzy or unrealizable expressions of intent and initiate repair on the communicative intent.
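As a toy illustration of that second option, here is a minimal sketch of an intent layer that either executes a high-confidence interpretation or surfaces its own uncertainty and asks for repair. All names here (parse_intent, Intent, the 0.9 threshold) are hypothetical, not any real product’s API:

```python
# A minimal sketch of the "notice fuzzy intent and initiate repair" pattern.
# All names and thresholds are illustrative assumptions, not a real system.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Intent:
    action: str                       # what the system believes the user wants
    confidence: float                 # how sure the interpreter is (0.0 to 1.0)
    ambiguity: Optional[str] = None   # human-readable note on what is unclear

def parse_intent(request: str) -> Intent:
    """Stand-in for whatever NLU/heuristic layer interprets the request."""
    if "play" in request and "something" in request:
        return Intent(action="play_music", confidence=0.4,
                      ambiguity="which artist, genre, or playlist?")
    return Intent(action="play_music", confidence=0.95)

def handle(request: str) -> str:
    intent = parse_intent(request)
    if intent.confidence >= 0.9:
        # Tight coupling: the request reliably causes what is desired.
        return f"executing: {intent.action}"
    # Otherwise, surface the uncertainty instead of guessing silently,
    # i.e. initiate repair on the communicative intent.
    return f"clarifying question: {intent.ambiguity}"

print(handle("play the new album by that band"))  # executing: play_music
print(handle("play something"))                   # clarifying question: which artist, genre, or playlist?
```

The point of the sketch is the branch, not the parsing: a system that cannot tell when its own interpretation is shaky has no choice but to guess silently, which is exactly the failure mode described above.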
Maybe in the mid 2020s things will get better, but in 2018:
...observing users struggle with the AI interfaces felt like a return to the dark ages of the 1970s: the need to memorize cryptic commands, oppressive modes, confusing content, inflexible interactions — basically an unpleasant user experience.