I made that toddler training montage video even though iMovie is a piece of shit and its UI should die in a fire.
I think this was actually a pretty interesting example that is worth going into more detail about. (I was there at the time Elizabeth was learning iMovie, and personally thought of this as the key insight behind the post.)
iMovie does a particular thing where it resizes clips when you pinch the timeline with your fingers on the trackpad. This is part of a general trend toward having screens respond in (what is attempting to be) an organic way, which trades off against predictability in some ways. (It always resizes the teeniest sliver of footage-time to be large enough that you can see a thumbnail of the clip, even if it’s only 1% as long as the other nearby clips.)
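Concretely, the behavior seems to be something like the sketch below (my guess at the logic, not iMovie’s actual code; the names and the 48px minimum are made up for illustration):

```typescript
// Hypothetical sketch of the clamping behavior described above: clips are laid
// out proportionally to their duration, but any clip that would render narrower
// than some minimum thumbnail width gets bumped up to that minimum.
interface Clip {
  name: string;
  durationSec: number;
}

const MIN_THUMBNAIL_PX = 48; // assumed minimum width so a thumbnail stays visible

function layoutTimeline(clips: Clip[], pixelsPerSecond: number): number[] {
  return clips.map((clip) => {
    const proportionalWidth = clip.durationSec * pixelsPerSecond;
    // The "organic" tradeoff: a 1-second sliver next to a 100-second clip still
    // gets MIN_THUMBNAIL_PX, so widths stop being proportional to durations.
    return Math.max(proportionalWidth, MIN_THUMBNAIL_PX);
  });
}

// Pinching the trackpad effectively changes pixelsPerSecond:
const clips: Clip[] = [
  { name: "intro", durationSec: 100 },
  { name: "sliver", durationSec: 1 },
];
console.log(layoutTimeline(clips, 2));   // [200, 48]
console.log(layoutTimeline(clips, 0.5)); // [50, 48]: zooming out barely changes the sliver
```

Once the clamp kicks in, the mapping from “how far I pinched” to “what the timeline looks like” stops being simple, which is the predictability cost.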
And while my naive reaction is “this is bullshit”, I also see how the endgame for this evolution of UI-style is the Iron Man interface:
...which is probably going to depend on you having a bunch of familiarity with “finger sliding” UI, which may evolve over time.
I think there’s a shift. When I was learning tech, the goal was to build a model of what was going on under the hood so you could control it, on its terms. Modern tech is much more about guessing what you want and delivering it, which seems like it should be better and maybe eventually will be, but right now is frustrating. It’s similar to when I took physics-for-biologists despite having the math for physics-for-physics-majors. Most people must find the for-biologists version easier or they wouldn’t offer it, but it was obvious to me that I would have gained more predictive power with less effort if I’d taken a more math-based class.
Modern tech is much more about guessing what you want and delivering it, which seems like it should be better and maybe eventually will be, but right now is frustrating.
Reminds me of this:
[Wolfram Alpha] is not a full-text search engine. It is a database query and visualization tool. More precisely, it is a large (indeed, almost exhaustive) set of such tools. These things may seem similar, but they are as different as popes and partridges.
Google is not a control interface; WA is. When you use WA, you know which of these tools you wish to select. You know that when you type “two cups of flour and two eggs” (which now works) you are looking for a Nutrition Facts label. It is only Stephen Wolfram’s giant electronic brain which has to run ten million lines of code to figure this out. Inside your own brain, it is written on glowing letters across your forehead.
So the giant electronic brain is doing an enormous amount of work to discern information which the user knows and can enter easily: which tool she wants to use.
When the giant electronic brain succeeds in this task, it has saved the user from having to manually select and indicate her actual data-visualization application of choice. This has perhaps saved her some time. How much? Um, not very much.
When the giant electronic brain fails in this task, you type in Grandma’s fried-chicken recipe and get a beautiful 3-D animation of a bird-flu epidemic. (Or, more likely, “Wolfram Alpha wasn’t sure what to do with your input.” Thanks, Wolfram Alpha!) How do you get from this to your Nutrition Facts? Rearrange some words, try again, bang your head on the desk, give up. What we’re looking at here is a classic, old-school, big steaming lump of UI catastrophe.
And does the giant electronic brain fail? Gosh, apparently it does. After many years of research, WA is nowhere near achieving routine accuracy in guessing the tool you want to use from your unstructured natural-language input. No surprise. Not only is the Turing test kinda hard, even an actual human intelligence would have a tough time achieving reliability on this task.
The task of “guess the application I want to use” is actually not even in the domain of artificial intelligence. AI is normally defined by the human standard. To work properly as a control interface, Wolfram’s guessing algorithm actually requires divine intelligence. It is not sufficient for it to just think. It must actually read the user’s mind. God can do this, but software can’t.
Of course, the giant electronic brain is an algorithm, and algorithms can be remembered. For instance, you can be pretty sure that the example queries on the right side of your screen (“June 23, 1988”) will always send you to the same application. If you memorize these formats and avoid inappropriate variations, you may not end up in the atomic physics of the proton.
This is exactly what people do when circumstances force them to use this type of bad UI. They create an incomplete model of the giant electronic brain in their own, non-giant, non-electronic brains. Of course, since the giant electronic brain is a million lines of code which is constantly changing, this is a painful, inadequate and error-prone task. But if you are one of those people for whom one of Wolfram’s data-visualization tools is useful, you have no choice.
[...]
Thus, the “flexible” and “convenient” natural-language interface becomes one which even Technology Review, not exactly famous for its skepticism, describes as “inflexible.” The giant electronic brain has become a giant silicon portcullis, standing between you and your application of choice. You can visualize all sorts of queries with Wolfram Alpha—but first you have to trick, cajole, or otherwise hack a million lines of code into reading your mind.
https://www.unqualified-reservations.org/2009/07/wolfram-alpha-and-hubristic-user/
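To make the essay’s distinction concrete, here is a toy sketch of the two interface styles. Everything below is made up for illustration (the types, the regex, the fake tool) and has nothing to do with Wolfram Alpha’s actual implementation: a control interface where you name the tool and hand it structured input, versus a front end that has to guess the tool from free text and can guess wrong.

```typescript
// Toy illustration of the contrast in the quoted essay (hypothetical API,
// nothing like Wolfram Alpha's real code).

// A "control interface": the user names the tool and supplies structured input.
interface NutritionQuery {
  tool: "nutrition-facts";
  ingredients: { item: string; quantity: string }[];
}

function runNutritionTool(query: NutritionQuery): string {
  // The tool never has to guess what you meant; it only has to do its job.
  return `Nutrition facts for: ${query.ingredients
    .map((i) => `${i.quantity} ${i.item}`.trim())
    .join(", ")}`;
}

// The "giant electronic brain": free text goes in, and a classifier has to
// guess which tool you wanted before it can even start.
function guessToolAndRun(freeText: string): string {
  if (/cups?|eggs?|flour/i.test(freeText)) {
    return runNutritionTool({
      tool: "nutrition-facts",
      ingredients: [{ item: freeText, quantity: "" }],
    });
  }
  // The failure mode from the essay: the guess misses, and the user is left
  // rephrasing until they stumble onto a format the classifier recognizes.
  return "Wasn't sure what to do with your input.";
}

console.log(
  runNutritionTool({
    tool: "nutrition-facts",
    ingredients: [
      { item: "flour", quantity: "2 cups" },
      { item: "eggs", quantity: "2" },
    ],
  })
); // "Nutrition facts for: 2 cups flour, 2 eggs"
console.log(guessToolAndRun("two cups of flour and two eggs")); // happens to work
console.log(guessToolAndRun("Grandma's fried-chicken recipe")); // guess fails
```

In the second style, the only way to get reliable results is to learn which phrasings the guesser recognizes, which is exactly the “incomplete model of the giant electronic brain” the essay describes.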