This can best be seen in GUIs, which, despite their (often) intuitive nature, come nowhere near matching the ‘unlimited power’ of command-line interfaces.
It depends what software you’re talking about. Here are three examples: Photoshop (2D raster image processing), Blender (3D modelling and animation), and Maya (ditto). As far as I know, none of these have command-line interfaces.[1] How would you use a command-line interface to paint a picture, or model a 3D character?
I could add Illustrator (2D object-oriented image processing) and COMSOL (finite element engineering calculations) to that list as well. GUI and API, but no CLI beyond the needs of batch processing.
[1] This needs some amplification. All of them have programming interfaces, but that is something different. Blender (and I expect Maya as well, but I’m less familiar with it) can be invoked from the command line, with options to say what you want it to do, but that’s only useful for batch-type tasks like final-quality renders of complex scenes and movies. Everything you can do in the CLI you can do in the GUI, but most of what you can do in the GUI cannot be done from the CLI.
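For concreteness, here is a rough sketch (in Python, since that is what Blender embeds) of the kind of batch render I mean; the scene file and output path are made up, and it assumes Blender is started headless with its --background and --python options:

    # batch_render.py -- illustrative only; scene.blend and the output path are hypothetical.
    # Run headless with something like:
    #   blender --background scene.blend --python batch_render.py
    import bpy  # Blender's built-in Python API

    scene = bpy.context.scene
    scene.render.filepath = "/tmp/final_"            # where rendered frames are written
    scene.render.image_settings.file_format = "PNG"  # final-quality still frames
    bpy.ops.render.render(animation=True)            # render every frame of the animation

Everything a script like this does could also be done by clicking in the GUI; the point is that the reverse is not true.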
GIMP has a scripting language and ImageMagick is entirely scripted.
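To illustrate (file names invented), the fully scripted style ImageMagick is built around looks something like this when driven from Python rather than a shell:

    # Illustrative sketch only: photo.png / thumbnail.png are made-up names,
    # and the classic 'convert' tool is assumed to be on PATH
    # (ImageMagick 7 ships the same functionality as 'magick').
    import subprocess

    subprocess.run(
        ["convert", "photo.png", "-resize", "800x600", "-sharpen", "0x1", "thumbnail.png"],
        check=True,  # raise an error if ImageMagick fails
    )

The same resize-and-sharpen could be clicked through in GIMP, but only the scripted form can be repeated over thousands of images unattended.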
I agree that some tasks, especially selecting part of an image, are (currently) most easily done by pointing, because image recognition isn’t advanced enough yet.
Many CAD systems have a command language.
Specialized ‘graphical’ tasks like circuit layout used to be done by hand but have since moved to specialized languages.
I’d guess that sooner or later you’d rather use speech to compose most parts of the image and use (force feedback) motion for specialized painting actions and transformations.
I’m rather baffled by how I would use speech to paint into a Photoshop window. Force feedback motion already exists for 2D painting—graphics tablets are standard equipment for artists.
There are things in 3D animation that can be usefully expressed as text, but the only examples I know of are scripted procedural animation, in which the possibility of textual expression arises from limitations imposed on the repertoire of available movement. The example I’m most familiar with is deaf sign language, and the HamNoSys notation in particular (because I’ve worked with it and written software to translate it into animation data).
I agree with the original point that text is an essential medium that is not going away, but I think that GUIs vs CLIs is not the issue. Each has uses not easily replicated by the other. CLIs are more scalable, but GUIs provide memory cues and physical interaction. The main reason is just that words, spoken or written, are what people use to communicate with each other, whether via a computer or not. And only the written word is easily accessible for re-use.
You wouldn’t “paint into a Photoshop window”. I’d imagine saying, e.g., “put a circular animation of a growing fern around the center of the pulsating ball” and then tweaking some of the parameters of the fern or its growth via force feedback.