No, no, a thousand times no. This is a huge step backwards for UI. This is taking us back to the UI of old-school text adventure games, where I have to guess what specific words the inscrutable interpreter is looking for in order to do what I want it to do.
I do not want the system to “do what I mean”; I want the system to do exactly what I tell it to do. In practice, every system that attempts to “do what the user means” ends up becoming an extremely janky command-line system with syntax that makes tcsh look sane.
Instead of constantly chasing new interaction paradigms like novelty-addicted squirrels, I would much rather UI designers and developers spend time improving the performance and organization of existing systems.
Since the AI understands natural language, it grasps what you mean by your words; you don’t have to put thought into which keywords to use.
This sounds like it would be “natural” to use, but it would not be, because translating intention into language is cognitively effortful, and deeply unnatural for a very wide array of action types.
I often do not think in words about what I want to do, or want done. Indeed I often don’t think about doing the thing at all, I just do it, and insofar as there’s cognition to be done, it’s done as part of the action, transparently.
Having to translate everything into words would dramatically narrow the cognitive bandwidth between me and the effects I can accomplish with my various technological tools.
A lot of people, including me, sometimes think in words, and can translate effortlessly the rest of the time, so I don’t think people would, as a rule, have to think about it too much.
Eventually, I think, everyone would acclimatize, and instead of effortlessly doing the thing, they would learn to effortlessly command the AI.
It’s an interesting point I hadn’t considered before.
Edit: I also like how both our comments are correctness-strong-downvoted by a single person, yet we more or less contradict each other. Oh, well.
Well, while it’s unlikely that we’re both right, so long as our views are not literally logical negations of each other, it is surely possible for us both to be wrong…
In addition to what Said Achmiz wrote, I would also add that an AI that unerringly knows what I mean is a superhuman intelligence.
People have to clarify their instructions to other people all the time, and in a non-trivial number of instances, the person giving the instruction gets frustrated and says something to the effect of, “It would be faster if I’d just done it myself.”
That’s definitely true.