I’d guess that sooner or later you’d rather use speech to compose most parts of the image and use (force feedback) motion for specialized painting actions and transformations.
I’m rather baffled by how I would use speech to paint into a Photoshop window. Force feedback motion already exists for 2D painting—graphics tablets are standard equipment for artists.
There are things in 3D animation that can be usefully expressed as text, but the only examples I know of are scripted procedural animation, in which the possibility of textual expression arises from limitations imposed on the repertoire of available movement. The example I’m most familiar with is deaf sign language, and the HamNoSys notation in particular (because I’ve worked with it and written software to translate it into animation data).
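To make the notation-to-animation idea concrete, here is a minimal sketch of that kind of pipeline: a sequence of notation tokens is mapped to keyframes for a hand joint. The token set, positions, and joint name are invented for illustration and are far simpler than real HamNoSys, which encodes handshape, orientation, and movement as well.

```python
# Hypothetical sketch of translating a (much simplified) sign-notation
# string into animation data. Tokens and joint names are invented; real
# HamNoSys is vastly richer than this location-only toy.

# Map invented location tokens to hand positions (x, y, z) in a body-centric frame.
TOKEN_POSITIONS = {
    "chest":    (0.0, 1.2, 0.2),
    "chin":     (0.0, 1.5, 0.15),
    "forehead": (0.0, 1.65, 0.1),
}

def notation_to_keyframes(tokens, frame_step=10):
    """Turn a sequence of location tokens into keyframes for the right hand."""
    keyframes = []
    for i, tok in enumerate(tokens):
        if tok not in TOKEN_POSITIONS:
            raise ValueError(f"unknown token: {tok}")
        keyframes.append({
            "frame": i * frame_step,
            "joint": "right_hand",
            "position": TOKEN_POSITIONS[tok],
        })
    return keyframes

print(notation_to_keyframes(["chest", "chin"]))
```

The point of the limitation the parent mentions shows up directly: textual expression works here precisely because the repertoire of movement has been restricted to a small, enumerable vocabulary.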
I agree with the original point that text is an essential medium that is not going away, but I think that GUIs vs CLIs is not the issue. Each has uses not easily replicated by the other. CLIs are more scalable, but GUIs provide memory cues and physical interaction. The main reason is just that words, spoken or written, are what people use to communicate with each other, whether via a computer or not. And only the written word is easily accessible for re-use.
You wouldn’t “paint into a Photoshop window”. I’d imagine saying e.g. “put a circular animation of a growing fern around the center of the pulsating ball” and then tweaking some of the parameters of the fern or its growth via force feedback.
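The two-stage workflow described here can be sketched as code: a spoken command instantiates a parameterized scene element, and a continuous controller (such as a force-feedback knob) then nudges individual parameters. Everything here is hypothetical, including the `SceneElement` class and the toy phrase matching standing in for real speech understanding.

```python
# Hypothetical sketch: speech composes, continuous input refines.
# All names (SceneElement, command_to_element, apply_tweak) are invented.

class SceneElement:
    def __init__(self, kind, **params):
        self.kind = kind
        self.params = dict(params)

def command_to_element(command):
    # Stand-in for actual speech recognition and parsing:
    # recognize one known phrase and build an element with default parameters.
    if "growing fern" in command:
        return SceneElement("fern_animation",
                            growth_rate=1.0, radius=0.5,
                            anchor="pulsating_ball_center")
    raise ValueError("unrecognized command")

def apply_tweak(element, param, delta):
    # Continuous adjustment, as a force-feedback device might deliver it.
    element.params[param] += delta

fern = command_to_element("put a circular animation of a growing fern "
                          "around the center of the pulsating ball")
apply_tweak(fern, "growth_rate", +0.25)
print(fern.params["growth_rate"])  # 1.25
```

The design point is the division of labor: speech is good at naming discrete things and relations, while force feedback is good at scrubbing through a continuous parameter space.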