It’s the holidays, which means it’s also “teach technology to your elderly relatives” season. Most of my elderly relatives are pretty smart, and were technically advanced in their day. Some were engineers or coders back when that was rare. When I was a kid they were often early adopters of tech. Nonetheless, they are now noticeably worse at technology than my friends’ 3-year-old. That kid figured out how to take selfie videos on my phone after watching me do it once, and I wasn’t even deliberately demonstrating.
Meanwhile, my aunt (who was the first girl in her high school to be allowed into technical classes) got confused when attempting to use an HBO Max account I’d mostly already configured for her (I think she got confused by the new profile taste poll, but I wasn’t there so I’ll never be sure). She pays a huge fee to use Go Go Grandparent instead of getting a smartphone and using Uber directly. I got excited when an uncle seemed to understand YouTube, until it was revealed that he didn’t know about channels and viewed the subscribe button as a probable trap. And of course, there was my time teaching my PhD-statistician father how to use Google Sheets, which required learning a bunch of prerequisite skills he’d never needed before and that I wouldn’t have had the patience to teach if it hadn’t benefited me directly.
[A friend at a party claimed Apple did a poll on this and found the subscribe button to be a common area of confusion for boomers, to the point that they were thinking of changing the “subscribe” button to “follow”. And honestly, given how coy Substack is around what exactly I’m subscribing to and how much it costs, this isn’t unreasonable.]
The problem isn’t that my relatives were never competent with technology, because some of them very much were at one point. I don’t think it’s a general loss of intelligence either, because they’re still very smart in other ways. Also they all seem to have kept up with shopping websites just fine. But actions I view as atomic clearly aren’t for them.
Meanwhile, I’m aging out of being the cool young demographic marketers crave. New apps appeal to me less and less often. Sometimes something does look fun, like video editing, but the learning curve is so steep, and I don’t need to make an Eye of the Tiger-style training montage of my friends’ baby learning to buckle his car seat that badly, so I pass it by and focus on the millions of things I want to do that don’t require learning a new technical skill.
Then I started complaining about YouTube voice, and could hear echoes of my dad in 2002 complaining about the fast cuts in the movie Chicago.
Bonus points: I watched this just now and found it painfully slow.
I have a hypothesis that I’m staring down the path my boomer relatives took. New technology kept not being worth it to them, so they never put in the work to learn it, and every time they fell a little further behind in the language of the internet – UI conventions, but also things like the interpersonal grammar of social media – which made the next new thing that much harder to learn. Eventually, learning new tech felt insurmountable to them no matter how big the potential payoff.
I have two lessons from this. One is that I should be more willing to put in the time to learn new tech on the margin than I currently am, even if the use case doesn’t justify the time. Continued exposure to new conventions is worth it. I have several Millennial friends who are on TikTok specifically to keep up with the youths; alas, this does not fit in with my current quest for Quiet.
I’ve already made substantial concessions to the shift from text to voice, consuming many more podcasts and videos than I used to and even appearing on a few, but I think I need to get over my dislike of recordings of my own voice to the point that I can listen to them. I made that toddler training-montage video even though iMovie is a piece of shit and its UI should die in a fire. This was both an opportunity to learn new skills and a way to manufacture future inspiration for when things are hard.
Second: there’s a YouTube channel called “Dad, How Do I?” that teaches basic household skills like changing a tire, tying a tie, or making macaroni and cheese. We desperately need the equivalent for boomers, in a form that’s accessible to them (maybe a simplified app? Or even start with a static website). “Child, how do I…?” could cover watching individual videos on YouTube, the concept of channels, not ending every text message with “…”, Audible, etc. Things younger people take for granted. Advanced lessons could cover Bluetooth headphones and choosing your own electronics. I did some quick math and this is easily a $500,000/year business.
[To answer the obvious question: $500k/year is more than I make doing freelance research, but not enough more to cover the difference in impact and enjoyment. But if you love teaching or even just want to defray the cost of video equipment for your true passion, I think this is promising.]
My hope is that if we all work together to learn things, fewer people will be left stranded without access to technical tools, and also that YouTube voice will die out before it reaches something I care about.
Nice article. As a late-bloomer boomer (68 years old) I find myself frustrated with those within my age bracket (65-70) who resist the most basic skills needed to navigate the world as it is, whether they like it or not. Ex: a brother-in-law who uses a flip phone and expects me to time picking him up at the airport because he is too cheap and stubborn to learn how to text on a reasonable cell phone. I would note that your observation about being marginalized by marketers is true for us as well; compare the ads on TV at different times of day: old-people ads in the morning, ads for bored, jobless, staying-home-sick folks midday, and the young, exciting people in prime time.
I try to keep up with younger generations and their views of the world through things like this forum, some Discord channels, and EA stuff. However, I try not to play the old-guy card, and personally I find that having a sense of age-appropriateness is worthwhile. Age is not, in my opinion, just a number. Our bodies get old, and dealing with that process takes more and more attention. Throw in signing into some streaming service just because you want to relax for a minute, or having to navigate passwords to get to banking, doctors, or the library (for crying out loud in a bucket, as my father used to say), and it can make a person sometimes seem inept and grumpy.
For me (52 yrs old) it would actually be quite helpful to know what I should know/look into to keep up with current technologies. What is the current “internet canon” of tools, sites, and programs? And more general: How can I—at any given time—best find out which new things on the internet I should at least superficially learn in order not to be left behind?
It helps if you have kids. I have frequent discussions with my son about why a certain new tech is worthwhile. And I challenge him to find solutions that I’d like to have but that don’t exist—or that I just don’t know about, as it turns out in some cases. I discovered notion.com this way, many Google Suite features (he convinced me to use GChat), and the transcript panel of YouTube.
The subscribe button on Youtube is a trap. And I say this as somebody who knows exactly what a channel is, why channels exist, why that subscribe button is there, and many of the reasons for fine little details in how they work; who could, if necessary, code Youtube from scratch, from the bare metal right up to all of the UI bloat; and who participated in building the technology base that Youtube relies on.
For that matter, spreadsheets are kind of a cognitive trap, and it’s not necessarily a good idea to invest a lot in learning to use them… let alone to invest time and effort in learning a cloud version.
Sure, I’ll bite, why is the youtube subscribe button a trap? I anticipate that we will agree about what the subscribe button is and what it does, which means that this is fundamentally going to be a disagreement about what the definition of a trap is. I’m not interested in litigating that, so mostly I am curious about any information you have about how subscribing works that you expect I don’t already know.
The subscribe button is there to take advantage of your cognitive and motivational structure and keep you “engaged” with YouTube. Having subscriptions gives you a “reason” to return to Youtube on a regular basis, and gives Youtube an excuse to send you “reminders” about content in your subscribed channels.
Your subscriptions may also help Youtube to feed you content that keeps you there once you show up, although Youtube has access to other, often more effective ways of doing that, and having to “honor” subscriptions may actually interfere with those, so I don’t think it really counts.
Anyway, the bottom line is that, if you are like most people, subscriptions will contribute to you spending more time on Youtube than you “should”, in the sense that your Youtube time will interfere with goals that you would, if asked, say were more important. The intent is to have “watching Youtube” be an activity in itself, rather than having Youtube be a tool that you use to get information relevant to some outside purpose.
The subscription system is also used to motivate people to give content to Youtube. Although some mega-channels make economic sense, the “gamification” of subscription numbers helps to motivate marginal creators to spend more time and effort than they can really afford.
Subscriptions may occasionally help to meet a “user goal” like learning or staying informed about a specific topic… but their design and usual effect is to advance the “Youtube goal” of keeping the user staring at, or possibly producing, Youtube content and advertising, more than the user otherwise would and regardless of the user’s own interests (in any sense of the word “interests”...).
Some people will say that the trap has to do with tracking your activities, but that’s basically not true. Subscriptions don’t track you any more than just visiting any major Web site will track you. It’s more about controlling your activities. Your subscriptions do help a little with analyzing you as an advertising target, but I don’t think that’s a really major purpose or effect.
I appreciate that you took the time to explain your position. I think this is indeed a difference in the definition of “trap”, so I’ll leave it here.
Why are spreadsheets a trap, and what do you use instead? (What do you mean by a ‘cloud version’, Google’s spreadsheets?)
Spreadsheets make it really easy to set up a simple “mathematical model”. It really doesn’t take more than about 5 minutes to learn enough about spreadsheets to get something useful going, and actually starting a spreadsheet has very low overhead, both in terms of what you have to do and of how much thought you have to put into it.
The problem with that is that it’s easy to start using them for everything, including things that are really too complicated to be safely done in a spreadsheet. It’s also possible for something that started out as a reasonable spreadsheet application to grow into an unreliable, unmaintainable monstrosity.
If you view a spreadsheet as a program, it’s written in a “write-only language”. It’s really hard to come into a big spreadsheet and understand how everything works, or know how to safely make a change, or meaningfully review it for correctness, or even apply revision control to it. There’s no global view; you have to interact with the whole thing one cell at a time. And you’re not exactly encouraged to give things meaningful names, either.
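To make the naming point concrete, here is a sketch in Python of the kind of logic that often hides in a cell formula; the formula, the function name, and the column meanings are all invented for illustration:

```python
# The cell formula =IF(B2>0, B2*C2-D2, 0) says nothing about what
# columns B, C, and D mean. The same logic with names spelled out
# (all invented here, purely for illustration):
def order_profit(units: int, unit_price: float, shipping_cost: float) -> float:
    """Profit on one order, or zero if nothing was ordered."""
    if units <= 0:
        return 0.0
    return units * unit_price - shipping_cost
```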
… but it’s SO EASY to start with a spreadsheet that people are often lulled into making one, and then adding more and more to it. When your spreadsheet reaches a certain complexity level, you may then find yourself investing time into learning more and more arcane features so you can extend what it does… which means that, for you, a spreadsheet will become even more the default tool the next time you want to do something. But it’ll still be a bad programming language.
You can end up with “spreadsheet experts” who use them for everything. If I had a nickel for every spreadsheet I’ve seen that should have been a database, for example...
It’s sort of like writing shell scripts; it’s trivial to write a script to automate a few commands you do all the time, but if you keep adding features, then a year later you have a monstrosity that you wish you’d written in a regular, maintainable language.
And by a “cloud version”, I mean Google Sheets (or Office 365, for that matter). The data are controlled entirely by the host; it may or may not be feasible to extract all of the information you put in, and it’s definitely not going to be trivial if your spreadsheet has any complexity. The program that does the calculations is controlled entirely by the host, and may be changed at any time, including in ways that alter the results. The feature set is controlled entirely by the host; features you rely on may be changed or completely removed at any moment. Not really attractive as a long-term investment.
On edit: what I use instead is usually a real programming language. I won’t say which ones I favor, because it would be impolite to start even more of a language war. :-)
I would love a web-based tool that allowed me to enter data in a spreadsheet-like way, present it in a spreadsheet-like way, but use code to bridge the two.
Subtracting out the “web-based” part as a first class requirement, while focusing on the bridge made of code as a “middle” from which to work “outwards” towards raw inputs and final results...
...I tend to do the first ~20 data entry actions as variable constants in my code that I tweak by hand, then switch to the CSV format for the next 10^2 to 10^5 data entry tasks that my data labelers work on, based on how I think it might work best (while giving them space for positive creativity).
A semi-common transitional pattern during the CSV stage involves using cloud spreadsheets (with multiple people logged in who can edit together and watch each other edit (which makes it sorta web-based, and also lets you use data labelers anywhere on the planet)) and ends with a copypasta out of the cloud and into a CSV that can be checked into git. Data entry… leads to crashes… which leads to validation code… which leads to automated tooling to correct common human errors <3
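As a minimal sketch of the kind of validation code that falls out of this stage (the CSV columns, allowed labels, and auto-corrections here are all invented for illustration, not from any particular project):

```python
import csv

# Hypothetical validator for a labeling CSV with columns: id, label, confidence.
# Trivial human errors (stray whitespace, casing) are corrected silently;
# everything else is reported with its row number.
VALID_LABELS = {"cat", "dog", "other"}

def validate(path):
    errors = []
    with open(path, newline="") as f:
        for lineno, row in enumerate(csv.DictReader(f), start=2):  # line 1 is the header
            label = row["label"].strip().lower()  # auto-correct whitespace/casing
            if label not in VALID_LABELS:
                errors.append(f"row {lineno}: unknown label {row['label']!r}")
            try:
                c = float(row["confidence"])
                if not 0.0 <= c <= 1.0:
                    errors.append(f"row {lineno}: confidence {c} outside [0, 1]")
            except ValueError:
                errors.append(f"row {lineno}: confidence {row['confidence']!r} is not a number")
    return errors
```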
If the label team does more than ~10^4 data entry actions, and the team is still using CSV, then I feel guilty about having failed to upgrade a step in the full pipeline (including the human parts) whose path of desire calls out for an infrastructure upgrade if it is being used that much. If they get to 10^5 labeling actions with that system and those resources then upper management is confused somehow (maybe headcount maxxing instead of result maxxing?) and fixing that confusion is… complicated.
This CSV growth stage is not perfect, but it is highly re-usable during exploratory sketch work on blue water projects because most of the components can be accomplished with a variety of non-trivial tools.
If you know of something better for these growth stages, I’d love to hear about your workflows, my own standard methods are mostly self constructed.
There are tools that let you do that. There is a whole unit testing paradigm called fixtures for it. A prominent example is Fitnesse: http://fitnesse.org/FitNesse.UserGuide.WritingAcceptanceTests
I’m not sure I see how this resembles what I described?
Maybe I misunderstand what you have in mind? The idea is to (1) enter data in a spreadsheet, (2) have it interpreted as row-wise input to a function in a program (typically a unit test), and (3) have the result of the function added back into additional columns of the spreadsheet.
The idea is that I can do all this from my browser, including writing the code.
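A minimal sketch of steps (2) and (3) in Python, with a CSV file standing in for the spreadsheet (the names are invented for illustration, not taken from FitNesse; the in-browser part is the hard bit this doesn’t touch):

```python
import csv

def add_result_column(in_path, out_path, fn, result_name="result"):
    """Feed each row to fn and write the rows back with an extra result column."""
    with open(in_path, newline="") as f:
        rows = list(csv.DictReader(f))   # assumes at least one data row
    fieldnames = list(rows[0].keys()) + [result_name]
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        for row in rows:
            row[result_name] = fn(row)   # the function (or unit test) under test
            writer.writerow(row)

# Usage: add a computed column to a table of test cases.
# add_result_column("cases.csv", "cases_with_results.csv",
#                   lambda row: float(row["a"]) + float(row["b"]))
```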
That would be cool. I think it should be relatively easy to set up with replit (online IDE).
Sounds a bit like AlphaSheets (RIP).
Different programming languages are for different things.
‘I use this instead of spreadsheet’ - that’s a use case I haven’t heard a war over. (‘I use this note taking app’ - that I have read a lot of different sides on.)
Downvoted for being purely an argument from authority.
I believe the reason the author mentioned their credentials was not to establish themselves as an authority, but to indicate that it’s possible to see the subscribe button as a trap even if one is tech savvy and knows it has nothing to do with e.g. subscription billing. (In contrast to where the article implied people avoided the subscribe button due to not understanding it.)
This is a good example of a situation where I believe the principle of charity is being applied too strongly. The author’s claim was that it is a trap, not that it is possible to see it as a trap. The structure of that first paragraph is “Claim that it is a trap. Points about being an authority figure on the topic.” (FWIW I don’t mean any of this contentiously, just constructive criticism.)
In agreement: It is literally an argument from authority because there is no other proof given. Readers of the original comment are asked to assume the commenter is correct based on their authority and reputation.
Like pjeby, I think you missed his point. He was not arguing from authority, he was presenting himself as evidence that someone tech-savvy could still see it as a trap. His actual reason for believing it is a trap is in his reply to GWS.
I despise videos when text and photos would do—I’m far too often in a noisy (or shared quiet) space, and I read so much faster than people talk. I’m even more annoyed at videos that pad their runtime to hit ad minima or something—I can’t take a quick scroll to the end to see if it’s worthwhile, then go back and absorb what I need at my own pace.
I recognize that videos take less time for the creator, and pay better. So that’s the way of the world, but I don’t have to like it. I mention this mostly as an explanation that I know I’m in the “old man yells at cloud” phase of my life, and a reason that I’m OK with some aspects of it.
I think video has a potentially higher bandwidth of information than text. The downside is that it is more difficult to skim, especially for people who can speed-read. I was very happy when my son pointed out the transcript panel in YouTube, which partly solves that. I think there are quite a few valuable features left in that solution space.
Transcripts and playback at 1.5-2.5 speed (depending content) definitely helps a lot, as does a ToC with timestamps. You’re right that it’s higher bandwidth (in terms of information per second of participation), but I think my objection is that not all of that information is equally valuable, and I often prefer lower-bandwidth more-heavily-curated information.
Hmm, I wonder if I can generalize this to “communication bandwidth is a cost, not a benefit”. Spending lots more attention-effort to get a small amount more useful information isn’t a tradeoff I’ll make most of the time.
This makes it generally a worse medium for rational debate. Few people are willing to spend dozens of hours becoming familiar with the arguments of their opponents. So instead a vlog debate will degenerate into “each side produces hours of convincing videos, everyone watches the videos of their side and throws the links at the opponents, but no one bothers watching the opponents’ videos”.
There’s also the explore-exploit tradeoff: the younger you are, the more you should explore and accumulate new knowledge; whereas the older you are, the more knowledge you’ve already accumulated. Insofar as you expect additional information to only have marginal value, you should mostly exploit your existing knowledge.
So from that perspective, I’d say what these older relatives need is not so much better instruction, but a genuinely excellent & strongly motivating reason to learn some specific new thing. For instance, is learning how to use Youtube really worth their time and energy, when they have a perfectly functional TV in their living room?
I feel that way about lots of technology which only seems like a shiny new thing without enduring value. (And lots of it is even profoundly negative, e.g. I would be much better off if so much of the Internet wasn’t so incredibly addictive.)
In contrast, I do consider a small subset of technology and related skills as total game-changers, e.g. I’m sooooo much faster at touch-typing than at hand-writing that it affects the ways I think and communicate. Similarly, I tried voice commands on smartphones a few years ago, and was just thoroughly unimpressed by the quality back then; but it’s very obvious that this tool will eventually become good enough (or has already?) that it will become another game-changer in my ability to take notes, and to think, when I’m not at my PC.
From the outside, it does sound admittedly hard to tell the difference between shiny vs. game-changing technology.
On a more personal level, however, that part is easier. For instance, our family’s WhatsApp chat group would be a powerful incentive for my older relatives to learn how to use smartphones and this app, if they weren’t already fluent with technology; and similarly, there was talk among my relatives of uploading photos of their babies (/ grandchildren) to a privately shared Google Drive, which is again the kind of thing that would strongly motivate the grandparents to learn about that technology if they didn’t already know it.
I am 34 years old and I sense a very similar progression as you do, where I have mastered “early Internet” ways of doing things and I am less and less inclined to adopt new trends. Your remark about learning to be comfortable with hearing one’s own voice on recordings is very interesting to me.
By the way, your video has a suffix of 40 seconds of black silence :-)
I always stop watching once the belt clicks and missed that. Thanks!
I think this was actually a pretty interesting example that is worth going into more detail about. (I was there at the time Elizabeth was learning iMovie, and personally thought of this as the key insight behind the post)
iMovie does a particular thing where it resizes things when you squeeze the timeline with your fingers on the trackpad. This is part of a general trend towards having screens respond in (what is attempting to be) an organic way. This makes tradeoffs against being predictable in some ways. (It always resizes the teeniest sliver of footage-time to be large enough that you can see a thumbnail of the clip, even if it’s only 1% as long as the other nearby clips)
And while my naive reaction is “this is bullshit”, I also see how the endgame for this evolution of UI-style is the Iron Man interface:
...which is probably going to depend on you having a bunch of familiarity with “finger sliding” UI, which may evolve over time.
I think there’s a shift. When I was learning tech, the goal was to build a model of what was going on under the hood so you could control it, on its terms. Modern tech is much more about guessing what you want and delivering it, which seems like it should be better and maybe eventually will be, but right now is frustrating. It’s similar to when I took physics-for-biologists despite having the math for physics-for-physics-majors. Most people must find for-biologists easier or they wouldn’t offer it, but it was obvious to me I would have gained more predictive power with less effort if I’d taken a more math-based class.
Reminds me of this:
https://www.unqualified-reservations.org/2009/07/wolfram-alpha-and-hubristic-user/
“Please subscribe. It’s free, and you can always change your mind later.”—a successful YouTube call-to-action, from someone who understands the confusion about “subscribe” when many new folk come in from newspaper subscriptions. (Emphasis added)
Aside from the natural (to the human) effects surrounding learning and motivation—in this particular domain, in the current era, I suspect there are important sub-questions revolving around the effects of the “constant rippling and trembling” of an implicit norm of any-time all-the-time often-predatory UI changes pushed from afar, with the primary motivational hook being the service owner’s. In fact, you specifically mention
which puts me in mind of any number of interesting new dialog boxes or other widgets with unpredictable consequences. Maybe that all dissolves under “the key is to learn the UI grammar and a rough consensus of what shouldn’t break things”, but the notion of a unified platform grammar also gets eroded by the fashion cycles (I wonder how this differs by sub-medium, in particular mobile vs desktop vs Web).
It wasn’t until I was teaching my grandma to check her emails on a desktop that I realised quite how many pointless pop-ups there actually are. Once a “this software needs to update” pop-up is the difference between her getting to her emails and stopping in confusion, you suddenly realise that the chances of a typical desktop computer letting you get as far as an email login screen without a popup are low.
Apparently this is extremely common and there is a scientific explanation for it. And as an additional data point, I experienced it myself.
This doesn’t explain why young people with a similar lack of experience, e.g. the three-year-old mentioned in the post, have a vastly easier time learning new tech-related things.
1. 3-year-olds have an easier time learning anything than an adult (e.g. languages).
2. 3-year-olds don’t have any well-formed “ruts” in their neural pathways. New UI or workflows often cut across the existing ruts.
3. 3-year-olds do not worry whether they might break something—their parents would fix it.
Older people know that things can go wrong in various ways, but they are not sure how exactly. New scams are being invented every day. If you spend most of your time playing with the technology, you have a good idea about what is dangerous and what is not. If you only use it once in a while, it’s a minefield.
For example, if you notice that you missed a phone call and you call the person back… it can cost you lots of money (if the person is a scammer, setting up a paid service, then automatically calling up thousands of people and hanging up, expecting some of them to call back). When you see the missed call, is this something you consider before calling back? Most old people do not have a sufficiently good model; they are aware that some seemingly innocent things are dangerous, but they do not know which ones exactly.
If you teach a 70-year-old person how to use a smartphone, do you also explain to them all the possible things that can go wrong? Heck, I am not sure I could even list all the dangers. I rely on being an active online reader, so when a new scam is invented, I will hopefully read about it before someone tries it on me. But that old person is just thrown into a pool with sharks. Same if you teach someone to browse the web. Same if you teach someone to shop online. All the tech is full of scams, and if you get scammed, well, it sucks to be you; you should have been more tech-savvy.
(Recently, someone impersonated my 70-year-old mother on WhatsApp and a few other online messengers. I don’t even know how it is possible to create a WhatsApp account using someone else’s phone number; when I try to create an account, it checks the number by sending me a verification SMS. But apparently it is possible somehow, because someone did exactly this: created accounts with my mother’s phone number and some young woman’s photo, then used them to sell some cars. We have no idea how; my mother only uses her smartphone for calling and sending/receiving SMS. We just reported the whole thing to the police, and my mother changed her phone number.)
The tech is hostile, but if you keep using it every day, you get used to it and learn to navigate it. You recognize the most frequent scams, and with luck the rarer ones pass you by.
I suspect that in your WhatsApp case, someone spoofed her phone number so that they received the verification SMS instead. SMS verification has recently come to be considered an unsafe method, which is why there’s been a move towards two-factor authentication apps.
I’m not confident though, which only proves your point! I’m a professional software developer who reads about things like this all the time and I only have guesses at what went wrong.
One way this happens is through the social graph: one of your relatives or friends writes you, “I made a mistake and now a verification code was sent to your phone, can you please give me the verification code?”
When your 70-year-old mother gets such a message from another 70-year-old friend, she wants to help her friend and thus passes the verification code along. That verification code can then be used to take over the account and attack further targets.
If each old person has >10 similar contacts, you only need 10% to fall for this for each taken-over account to yield at least one more, letting the attack spread to more and more accounts.
Thanks, I learned yet another way to scam people. But no such thing happened. My mother understands the concept of SMS, she says this did not happen, and she keeps the old messages on her phone; I checked them. Someone simply made a WhatsApp account with her phone number without her receiving any SMS message. I have no idea how that is possible—but that is exactly my point. (And, as usual, WhatsApp does not have any customer service that we could contact and ask.)
She already changed her number, so unless the same thing happens again, we consider this problem solved. It was just an illustration of how difficult these things are to understand (even for an IT guy such as me).
Alternative explanation: Your mother did participate in the scam in some way and is too embarrassed to admit it. (You know your mother better than I do. I’m just saying this might have happened and you might not have considered it.)
Re: 1, I recall this being in dispute or at least oversimplified. If you put children and adults on equal footing (in particular, by giving adults the same amount of time to learn languages that children do—potentially several hours per day!), I would be astonished if children came out ahead.
(Notwithstanding some minor aspects of language which children indeed seem advantaged in learning, like speaking a language without an accent.)
The OP was hypothesizing that a lack of keeping up with tech trends leads to you “falling behind” and eventually reaching a point where learning new tech feels insurmountable. It is possible that this hypothesis is true, and that young people have such a huge advantage in learning new things that this advantage outweighs their similar lack of background knowledge.
I don’t get that sense though. There are some places where 40 year olds have an advantage over 5 year olds in learning new things. There are other places where 5 year olds have the advantage. Then there’s the question of how wide the gap is between 5 year olds and 40 year olds. Language comes to mind as a place where the gap would be massive, but new tech doesn’t feel like it should have a massive gap. My epistemic status is just musing though.
Worth noting this was an extremely brilliant and online three year old who had a bunch of experience with multiple devices. She might not have seen my particular phone before, but I expect she had a good grounding in UI grammar.
I had my first experience with TikTok recently. Someone was showing me some funny videos. It wasn’t until the third or fourth video that I finally realised: “Oh, these are dialogues, and each time there is a jump cut to the same person talking in the same voice but with a different backdrop, that symbolises a change in speaker.” The person showing me the videos could not believe this had not been obvious the first time.
Wait, didn’t this post just make a case that older people don’t keep up with new technology because they don’t feel they need it?
Doesn’t sound to me like you desperately need that app :)
That is true, but it is actually a main part of the problem. You don’t need many new apps, but by not using them you can cumulatively lose (or never gain) crucial competencies you do need later on.