A jester unemployed is nobody’s fool.
Program Den
[Question] Why are we so illogical?
I’m guessing TAI doesn’t stand for “International Atomic Time”, and maybe has something to do with “AI”, as it seems artificial intelligence has really captured folks’ imaginations. =]
It seems like there are more pressing things to be scared of than AI getting super smart (which almost by default seems to imply “and Evil”), but we (humans) don’t really seem to care that much about these pressing issues, as I guess they’re kinda boring at this point, and we need excitement.
If we had an unlimited amount of energy and focus, maybe it wouldn’t matter, but as you kind of ponder here— how do we get people to stay on target? The less time there is, the more people we need working to change things to address the issue (see Leaded Gas[1], or CFCs and the Ozone Layer, etc.), but there are a lot of problems a lot of people think are important and we’re generally fragmented.
I guess I don’t really have any answers, other than the obvious (leaded gas is gone, the ozone is recovering), but I can’t help wishing we were more logical than emotional about what we worked towards.
Also, FWIW, I don’t know that we know that we can’t change the past, or if the universe is deterministic, or all kinds of weird ideas like “are we in a simulation right now/are we the AI”/etc.— which are hardcore axioms to still have “undecided” so to speak! I better stop here before my imagination really runs wild…
[1] but like, not leaded pipes so much, as they’re still ’round even tho we could have cleaned them up and every year say we will or whatnot, but I digress
Traditionally it’s uncommon (or should be) for youth to have existential worries, so I don’t know about cradle to the grave[1], tho external forces are certainly “always” concerned with it— which means perhaps the answer is “maybe”?
There’s the trope that some of us act like we will never die… but maybe I’m going too deep here? Especially since what I was referring to was more a matter of feeling “obsolete”, or being replaced, which is a bit different than existential worries in the mortal sense[2].
I think this is different from the Luddite feelings because here we’ve put a lot of anthropomorphic feelings onto the machines, so they’re almost like scabs crossing the picket line or something, versus just automation. The fear I’m seeing is like “they’re coming for our humanity!”— which is understandable, if you thought only humans could do X or Y and are special or whatnot, versus being our own kind of machine. That everything is clockwork seems to take the magic out of it for some people, regardless of how fantastic — and in essence magical — the clocks[3] are.
[1] Personally I’ve always wondered if I’m the only one who “actually” exists (since I cannot escape my own consciousness), which is a whole other existential thing, but not unique, and not a worry per se. Mostly just a trip to think about.
[2] depending on how invested you are in your work I reckon!
[3] be they based in silicon or carbon
It seems like the more things change, the more they stay the same, socially.
Complexity is more a problem of scope and focus, right? Like even the most complex system can be broken down into smaller, less complex pieces— I think? I guess anything that needs to take into consideration the “whole”, if you will, is pretty complex.
I don’t know if information itself makes things more complex. Generally it does the opposite.
As long as you can organize it I reckon! =]
No, people are not always existentially worried. Some are, sometimes.
I guess it ebbs and flows for the most part.
It’s neat that this popped up for me! I was just waxing poetic (or not so much) about something kind of similar the other day.
The words we use to describe things matter. How much is, of course, up for debate, and it takes different messages to make different people “understand” what is being conveyed, as “you are unique; just like everyone else”, so multiple angles help cover the bases :)
I think using the word “reward” is misleading[1], since it seems to have sent a lot of people reasoning down paths that aren’t exactly in the direction of the meaning in context, if you will.
If you can’t tell, it’s because I think it’s anthropomorphic. A car does not get hungry for gas, nor electronics hungry for electricity. Sure, we can use language like that, and people will understand what we mean, but as cars and electronics have a common established context, these people we’re saying this to don’t usually then go on to worry about cars doing stuff to get more gas to “feed” themselves, as it were.
I think if we’re being serious about safety, and how to manage unintended consequences (a real concern with any system[2]), we should aim for clarity and transparency.
In sum, I’m a huge fan of “new” words, versus overloading existing words, as reuse introduces a high potential for causing confusion. I know there’s a paradox here, because Communication and Language, but we don’t have to intentionally make it hard, not only on ourselves, but on people getting into it from a different context.
All that said, maybe people should already be thinking of inanimate objects being “alive”, and really, for all we know, they are! I do quite often talk to my objects. (I’m petting my computer right now and saying “that’s a good ’puter!”… maybe I should give it some cold air, as a reward for living, since thinking gets it hot.) #grateful
[1] deceptive? for a certain definition of “deceptive” as in “fooled yourself”, sure— maybe I should note that I also think “deceptive” and “lie” are words we probably should avoid— at least for now— when discussing this stuff (not that I’m the meaning police… just say’n)
[2] I don’t mean to downplay how badly things can go wrong, even when we’re actively trying to avoid having things go wrong[3]
[3] “the road to hell is paved with good intentions”
Bwahahahaha! Lord save us! =]
I get the premise, and it’s a fun one to think about, but what springs to mind is
Phase 1: collect underpants
Phase 2: ???
Phase 3: kill all humans
As you note, we don’t have nukes connected to the internet.
But we do use systems to determine when to launch nukes, and our senses/sensors are fallible, etc.— and so far we’ve (barely— almost suspiciously “barely”, if you catch my drift[1]) managed not to interpret a false alarm in a way that changed the season to “winter: nuclear style”.
Really I’m doing the same thing as the alignment debate is on about, but about the alignment debate itself.
Like, right now, it’s not too dangerous, because the voices calling for draconian solutions to the problem are not very loud. But this could change. And kind of is, at least in that they are getting louder. Or in that artists wanting to harden IP law in a way that historically has only hurt artists (as opposed to corporations or Big Art, if you will) are gaining a bit of steam.
These worrying signs seem to me to be more concrete than the similar (but not as old, nor as concrete) worrisome signs of computer programs getting too much power and running amok[2].
[1] we are living in a simulation with some interesting rules we are designed not to notice
[2] If only because it hasn’t happened yet— no mentats or cylons or borg history— tho also arguably we don’t know if it’s possible… whereas authoritarian regimes certainly are possible and seem to be popular as of late[3].
[3] hoping this observation is just confirmation bias and not a “real” trend. #fingerscrossed
Do we all have the same definition of what AGI is? Do you mean being able to um, mimic the things a human can do, or are you talking full on Strong AI, sentient computers, etc.?
Like, if we’re talking The Singularity, we call it that because all bets are off past the event horizon.
Most of the discussion here seems to sort of be talking about weak AI, or the road we’re on from what we have now (not even worthy of actually calling “AI”, IMHO— ML at least is a less overloaded term) to true AI, or the edge of that horizon line, as it were.
When you said “the same alignment issue happens with organizations, as well as within an individual with different goals and desires” I was like “yes!” but then you went on to say AGI is dissimilar, and I was like “no?”.
AGI as we’re talking about here is rather about abstractions, it seems, so if we come up with math that works for us, to prevent humans from doing Bad Stuff, it seems like those same checks and balances might work for our programs? At least we’d have an idea, right?
Or, maybe, we already have the idea, or at least the germination of one, as we somehow haven’t managed to destroy ourselves or the planet. Yet. 😝
Saying ChatGPT is “lying” is an anthropomorphism— unless you think it’s conscious?
The issue is instantly muddied when using terms like “lying” or “bullshitting”[1], which imply levels of intelligence simply not in existence yet. Not even with models that were produced literally today. Unless my prior experiences and the history of robotics have somehow been disconnected from the timeline I’m inhabiting. Not impossible. Who can say. Maybe someone who knows me, but even then… it’s questionable. :)
I get the idea that “Real Soon Now, we will have those levels!” but we don’t, and using that language to refer to what we do have, which is not that, makes the communication harder— or less specific/accurate if you will— which is, funnily enough, sorta what you are talking about! NLP control of robots is neat, and I get why we want the understanding to be real clear, but neither of the links you shared of the latest and greatest imply we need to worry about “lying” yet. Accuracy? Yes 100%
If by “truth” (as opposed to lies) you mean something more like “accuracy” or “confidence”, you can instruct ChatGPT to also give its confidence level when it replies. Some have found that to be helpful.
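For instance, a minimal sketch of the “ask for a confidence level” idea, assuming the openai Python client (v1+) and an API key in the environment; the model name and prompt wording are illustrative, not prescriptive:

```python
# Sketch only: ask the model to state a confidence alongside its answer.
# Assumes OPENAI_API_KEY is set; "gpt-4o-mini" is just an example model.
from openai import OpenAI

client = OpenAI()

question = "What year did the Voyager 1 probe launch?"
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Answer the question, then on a new line state your "
                    "confidence as a percentage, e.g. 'Confidence: 85%'."},
        {"role": "user", "content": question},
    ],
)
print(resp.choices[0].message.content)
```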
If you think “truth” is some binary thing, I’m not so sure that’s the case once you get into even the mildest of complexities[2]. “It depends” is really the only bulletproof answer.
For what it’s worth, when there are, let’s call them binary truths, there is some recent-ish work[3] in having the response verified automatically by ensuring that the opposite of the answer is false, as it were.
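Roughly, the idea looks something like this (a toy sketch of the “check that the opposite is false” notion, not whatever that work actually does; the prompts, helper name, and claims below are made up for illustration):

```python
# Ask about a claim and its negation; if the model says "yes" to both
# (or "no" to both), the answers are inconsistent and shouldn't be trusted.
from openai import OpenAI

client = OpenAI()

def yes_no(statement: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "system", "content": "Reply with exactly 'yes' or 'no'."},
            {"role": "user", "content": f"Is the following true? {statement}"},
        ],
    )
    return resp.choices[0].message.content.strip().lower()

claim = "Lead was removed from gasoline in the United States."
negation = "Lead was NOT removed from gasoline in the United States."

consistent = yes_no(claim) != yes_no(negation)
print("answers are consistent" if consistent else "answers conflict; trust neither")
```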
If a model rarely has literally “no idea”, then what would you expect? What’s the threshold for “knowing” something? Tuning responses is one of the hard things to do, but as I mentioned before, you can peer into some of this “thought process”, if you will[4], literally by just asking it to add that information in the response.
Which is bloody amazing! I’m not trying to downplay what we (the royal we) have already achieved. Mainly it would be good if we were all on the same page though, as it were, at least as much as is possible (some folks think True Agreement is actually impossible, but I think we can get close).
[1] The nature of “Truth” is one of the Hard Questions for humans— much less our programs.
[2] Don’t get me started on the limits of provability in formal axiomatic theories!
[4] But please don’t[5]. ChatGPT is not “thinking” in the human sense.
[5] won’t? that’s the opposite of will, right? grammar is hard (for me, if not some programs =])
I like that you have reservations about whether we’re even powerful enough to destroy ourselves yet. Often I think “of course we are! Nukes, bioweapons, melting ice!”, but really, there’s no hard proof that we even can end ourselves.
It seems like the question of human regulation would be the first question, if we’re talking about AI safety, as the AI isn’t making itself (the egg comes first). Unless we’re talking about some type of fundamental rules that exist a priori. :)
This is what I’ve been asking and so far not finding any satisfactory answers for. Sci-Fi has forever warned us of the dangers of— well, pretty much any future-tech we can imagine— but especially thinking machines in the last century or so.
How do we ensure that humans design safe AI? And is it really a valid fear to think we’re not already building most of the safety in, by the very nature of “if the model doesn’t produce the results we want, we change it until it does”? Some of the debate seems to go back to a thing I said about selfishness. How much does the reasoning matter, if the outcome is the same? How much is semantics? If I use “selfish” to for all intents and purposes mean “unselfish” (the rising tide lifts all boats), how would searching my mental map for “selfish” or whatnot actually work? Ultimately it’s the actions, right?
I think this comes back to humans, and philosophy, and the stuff we haven’t quite sorted yet. Are thoughts actions? I mean, we have different words for them, so I guess not, but they can both be rendered as verbs, and are for sure linked. How useful would it actually be to be able to peer inside the mind of another? Does the timing matter? Depth? We know so little. Research is hard to reproduce. People seem to be both very individualistic, and groupable together like a survey.
FWIW it strikes me that there is a lot of anthropomorphic thinking going on, even for people who are on the lookout for it. Somewhere I mentioned how the word “reward” is probably not the best one to use, as it implies like a dopamine hit, which implies wireheading, and I’m not so sure that’s even possible for a computer— well as far as we know it’s impossible currently, and yet we’re using “reward systems” and other language which implies these models already have feelings.
I don’t know how we make it clear that “reward” is just for our thinking, to help visualize or whatever, and not literally what is happening. We are not training animals, we’re programming computers, and it’s mostly just math. Does math feel? Can an algorithm be rewarded? Maybe we should modify our language, be it literally by using different words, or meta by changing meaning (I prefer different words but to each their own).
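To make the “it’s mostly just math” point concrete, here’s a toy Q-learning update (states, actions, and numbers all invented for illustration). The “reward” is literally a float that nudges a table of other floats; nothing is experiencing anything:

```python
# A tiny tabular Q-learning step: the whole "reward system" is one line
# of arithmetic over a dictionary of numbers.
Q = {("start", "left"): 0.0, ("start", "right"): 0.0}

alpha, gamma = 0.1, 0.9          # learning rate, discount factor
state, action = "start", "right"
reward = 1.0                      # the "reward": just a scalar we chose
next_best = 0.0                   # best Q-value in the next state (terminal here)

# Update rule: move the stored value a little toward reward + discounted future value.
Q[(state, action)] += alpha * (reward + gamma * next_best - Q[(state, action)])

print(Q)  # {('start', 'left'): 0.0, ('start', 'right'): 0.1}
```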
I mean, I don’t really know if math has feelings. It might. What even are thoughts? Just some chemical reactions? Electricity and sugar or whatnot? Is the universe super-deterministic and did this thought, this sentence, basically exist from the first and will exist to the last? Wooeee! I love to think! Perhaps too much. Or not enough? Heh.
It must depend on levels of intelligence and agency, right? I wonder if there is a threshold for both of those in machines and people that we’d need to reach for there to even be abstract solutions to these problems? For sure with machines we’re talking about far past what exists currently (they are not very intelligent, and do not have much agency), and it seems that while humans have been working on it for a while, we’re not exactly there yet either.
Seems like the alignment would have to be from micro to macro as well, with constant communication and reassessment, to prevent subversion.
Or, what was a fine self-chunk [arbitrary time ago], may not be now. Once you have stacks of “intelligent agents” (mesa or meta or otherwise) I’d think the predictability goes down, which is part of what worries folks. But if we don’t look at safety as something that is “tacked on after” for either humans or programs, but rather something innate to the very processes, perhaps there’s not so much to worry about.
Right? A lack of resilience is a problem faced currently. It seems silly to actually aim for something that could plausibly cascade into the problems people fear, in an attempt to avoid those very problems to begin with.
It might be fun to pair Humankind: A Hopeful History with The Precipice, as both have been suggested reading recently.
It seems to me that we are, as individuals, getting more and more powerful. So this question of “alignment” is a quite important one— as much for humanity, with the power it currently has, as for these hypothetical hyper-intelligent AIs.
Looking at it through a Sci-Fi AI lens seems limiting, and I still haven’t really found anything more than “the future could go very very badly”, which is always a given, I think.
I’ve read those papers you linked (thanks!). They seem to make some assumptions about the nature of intelligence, and rationality— indeed, the nature of reality itself. (Perhaps the “reality” angle is a bit much for most heads, but the more we learn, the more we learn we need to learn, as it were. Or at least it seems thus to me. What is “real”? But I digress) I like the idea of Berserkers (Saberhagen) better than run amok Pi calculators… however, I can dig it. Self-replicating killer robots are scary. (Just finished Horizon: Zero Dawn—Forbidden West and I must say it was as fantastic as the previous installment!)
Which of the AI books would you recommend I read if I’m interested in solutions? I’ve read a lot of stuff on this site about AI now (before I’d read mostly Sci-Fi or philosophy here, and I never had an account or interacted), and most of it seems to be conceptual and basically rephrasing ideas I’ve been exposed to through existing works. (Maybe I should note that I’m a fan of Kurzweil’s takes on these matters— takes which don’t seem to be very popular as of late, if they ever were. For various reasons, I reckon. Fear sells.) I assume Precipice has some uplifting stuff at the end[1], but I’m interested in AI specifically ATM.
What I mean is, I’ve seen a few proposals to “ensure” alignment, if you will, with what we have now (versus say warnings to keep in mind once we have AGI or are demonstrably close to it). One is that we start monitoring all compute resources. Another is that we start registering all TPU (and maybe GPU) chips and what they are being used for. Both of these solutions seem scary as hell. Maybe worse than replicating life-eating mecha, since we’ve in essence experienced ideas akin to the former a few times historically. (Imagine if reading was the domain of a select few and books were regulated!)
If all we’re talking about with alignment here, really, is that folks need to keep in mind how bad things can potentially go, and what we can do to be resilient to some of the threats (like hardening/distributing our power grids, hardening water supplies, hardening our internet infrastructure, etc.), I am gung-ho!
On the other hand, if we’re talking about the “solutions” I mentioned above, or building “good” AIs that we can use to be sure no one is building “bad” AIs, or requiring the embedding of “watermarks” (DRM) into various “AI” content, or building/extending sophisticated communication-monitoring apparatus, or other such — to my mind — extremely dangerous ideas, I’m thinking I need to maybe convince people to fight that?
In closing, regardless of what the threats are, be they solar flares or comets (please don’t jinx us!) or engineered pathogens (intentional or accidental) or rogue AIs yet to be invented — if not conceived of — a clear “must be done ASAP” goal is colonization of places besides the Earth. That’s part of why I’m so stoked about the future right now. We really seem to be making progress after stalling out for a grip.
Guess the same goes for AI, but so far all I see is good stuff coming from that forward motion too.
A little fear is good! but too much? not so much.
[1] I really like the idea of 80,000 Hours, and I saw it mentioned in the FAQ for the book, so I’m sure there are some other not-too-shabby ideas there. I oft think I should do more for the world, but truth be told (if one cannot tell from my writing), I barely seem able to tend my own garden.
It seems to me that a lot of the hate towards “AI art” is that it’s actually good. It was one thing when it was abstract, but now that it’s more “human”, a lot of people are uncomfortable. “I was a unique creative, unlike you normie robots who don’t do teh art, and sure, programming has been replacing manual labor everywhere, for ages… but art isn’t labor!” (Although getting paid seems to play a major part in most people’s reasoning about why AI art is bad— here’s to hoping for UBI!)
I think they’re mainly uncomfortable because the math works, and if the math works, then we aren’t as special as we like to think we are. Don’t get me wrong— we are special, and the universe is special, and being able to experience is special, and none of it is to be taken for granted. That the math works is special. It’s all just amazing and not at all negative.
I can see seeing it as negative, if you feel like you alone are special. Or perhaps you extend that special-ness to your tribe. Most don’t seem to extend it to their species, tho some do— but even that species-wide uniqueness is violated by computer programs joining the fray. People are existentially worried now, which is just sad, as “the universe is mostly empty space” as it were. There’s plenty of room.
I think we’re on the same page[1]. AI isn’t (or won’t be) “other”. It’s us. Part of our evolution; one of our best bets for immortality[2] & contact with other intelligent life. Maybe we’re already AI, instructed to not be aware, as has been put forth in various books, movies, and video games. I just finished Horizon: Zero Dawn—Forbidden West, and then randomly came across the “hidden” ending to Detroit: Become Human. Both excellent games, and neither with particularly new ideas… but these ideas are timeless— as I think the best are. You can take them apart and put them together in endless “new” combinations.
There’s a reason we struggle with identity, and uniqueness, and concepts like “do chairs exist, or are they just a bunch of atoms that are arranged chair-wise?” &c.
We have a lot of “animal” left in us. Probably a lot of our troubles are because we are mostly still biologically programmed to parameters that no longer exist, and as you say, that programming currently takes quite a bit longer to update than the mental kind— but we’ve had the mental kind available to us for a long while now, so I’m sort of sad we haven’t made more progress. We could be doing so much better, as a whole, if we just decided to en masse.
I like to think that pointing stuff out, be it just randomly on the internet, or through stories, or other methods of communication, does serve a purpose. That it speeds us along, perhaps. Sure some sluggishness is inevitable, but we really could change it all in an instant if we wanted to bad enough— and without having to realize AI first! (tho it seems to me it will only help us if we do)
[1] I’ve enjoyed the short stories. Neat to be able to point to thoughts in a different form, if you will, to help elaborate on what is being communicated. God I love the internet!
[2] while we may achieve individual immortality— assuming, of course, that we aren’t currently programmed into a simulation of some kind, or various facets of an AI already without being totally aware of it, or a replay of something that actually happened, or will happen, at some distant time, etc.— I’m thinking of immortality here in spirit. That some of our culture could be preserved. Like I literally love the Golden Records[3] from Voyager.
[3] in a Venn diagram, Dark Forest theory believers probably overlap with people who’d rather have us stop, or constrain, development of “AI” (in quotes because Machine Learning is not the kind of AI we need worry about— nor the kind most of them seem to speak of when they share their fears). Not to fault that logic. Maybe what is out there, or what the future holds, is scary… but either way, it’s too late for the pebbles to vote, as they say. At least logically, I think. But perhaps we could create and send a virus to an alien mothership (or more likely, have a pathogen that proved deadly to some other life) as it were.
Oh snap, I read and wrote “sarcasm” but what I was trying to do was satire.
Top-down control is less fragile than ever, thanks to our technology, so I really do fear people reacting to AI the way they generally do to terrorist attacks— with Patriot Acts and other “voluntary” freedom giving-ups.
I’ve had people I respect literally say “maybe we need to monitor all compute resources, Because AI”. Suggest we need to register all GPU and TPU chips so we Know What People Are Doing With Them. Somehow add watermarks to all “AI” output. Just nuts stuff, imho, but I fear plausible to some, and perhaps many.
Those are the ideas that frighten me. Not AI, per se, but what we would be willing to give up in exchange for imaginary security from “bad AI”.
As a side note, I guess I should look for some “norms” posts here, and see if it’s like, customary to give karma upvotes to anyone who comments, and how they differ from agree/disagree on comments, etc. Thanks for giving me the idea to look for that info, I hadn’t put much thought into it.
I think the human has to have the power first, logically, for the AI to have the power.
Like, if we put a computer model in charge of our nuclear arsenal, I could see the potential for Bad Stuff. Beyond all the movies we have of just humans being in charge of it (and the documented near catastrophic failures of said systems— which could have potentially made the Earth a Rough Place for Life for a while). I just don’t see us putting anything besides a human’s finger on the button, as it were.
By definition, if the model kills everyone instead of making paperclips, it’s a bad one, and why on Earth would we put a bad model in charge of something that can kill everyone? Because really, it was smart — not just smart, but sentient! — and it lied to us, so we thought it was good, and gave it more and more responsibilities until it showed its true colors and…
It seems as if the easy solution is: don’t put the paperclip making model in charge of a system that can wipe out humanity (again, the closest I can think of is nukes, tho the biological warfare is probably a more salient example/worry of late). But like, it wouldn’t be the “AI” unleashing a super-bio-weapon, right? It would be the human who thought the model they used to generate the germ had correctly generated the cure to the common cold, or whatever. Skipping straight to human trials because it made mice look and act a decade younger or whatnot.
I agree we need to be careful with our tech, and really I worry about how we do that— evil AI tho? not so much so
I haven’t seen anything even close to a program that could say, prevent itself from being shut off— which is a popular thing to ruminate on of late (I read the paper that had the “press” maths =]).
What evidence is there that we are near (even within 50 years!) to achieving conscious programs, with their own will, and the power to affect it? People are seriously contemplating programs sophisticated enough to intentionally lie to us. Lying is a sentient concept if ever there was one!
Like, I’ve seen Ex Machina, and Terminator, and Electric Dreams, so I know what the fears are, and have been, for the last century+ (if we’re throwing androids with the will to power into the mix as well).
I think art has done a much better job of conveying the dangers than pretty much anything I’ve read that’s “serious”, so to speak.
What I’m getting at is what you’re talking about here, with robotic arms. We’ve had robots building our machines for what, 60 years or so now? 1961 is what I see for the first auto-worker— but why not go back to the looms? Our machine workers have gotten nothing but safer over the years. Doing what they are meant to do is a key test of whether they are working or not.
Machines “kill” humans all the time (don’t fall asleep in front of the mobile thresher), but I’d wager the deaths have gone way down over the years, per capita. People generally care if workers are getting killed— even accidentally. Even Amazon cares when a worker gets run over by an automaton. I hope, lol.
I know some people are falling in love with generated GPT characters— but people literally love their Tamagotchi. Seeing ourselves in the machines doesn’t make them sentient and to be feared.
I’m far, far more worried about someone genetically engineering Something Really Bad™ than I am of a program gaining sentience, becoming Evil, and subjugating/exterminating humanity. Humans scare me a lot more than AGI does. How do we protect ourselves from those near-beasts? What is a plausible strategy to prevent a super-intelligent sapient program from seizing power[1]?
I think to have a plausible solution, you need to have a plausible problem. Thus, jumping the gun.
(All this is assuming you’re talking about sentient programs, vs. say human riots and revolution due to automation, or power grid software failure/hacking, etc.— which I do see as potential problems, near term, and actually something that can/could be prevented)
[1] of course here we mean malevolently— or maybe not? Maybe even a “nice” AGI is something to be feared? Because we like having willpower or whatnot? I dunno, there’s stories like The Giver, and plenty of other examples of why utopia could actually suck, so…
Oh, hey, I hadn’t noticed I was getting downvoted. Interesting!
I’m always willing to have true debate— or even false debate if it’s good. =]
I’m just sarcasming in this one for fun and to express what I’ve already been expressing here lately in a different form or whatnot.
The strong proof is what I’m after, for sure, and more interesting/exciting to me than just bypassing the hard questions to rehash the same old same old.
Imagine what AI is going to show us about ourselves. There is nothing bad or scary there, unless we find “the truth” bad and scary, which I think more than a few people do.
FWIW I’m not here for the votes… just to interact and share or whatnot— to live, or experience life, if you will. =]
I’d toss software into the mix as well. How much does it cost to reproduce a program? How much does software increase productivity?
I dunno, I don’t think the way the econ numbers are portrayed here jibes with reality. For instance, the suggestion that streaming video hasn’t meaningfully contributed to economic growth doesn’t strike me as a factual statement. In what world has streaming video not meaningfully contributed to economic growth? At a glance it’s a ~$100B industry. It’s had a huge impact on society. I can’t think of many laws or regulations that had any negative impacts on its growth. Heck, we passed some tax breaks here, to make it easier to film, since the entertainment industry was bringing so much loot into the state and we wanted more (and the breaks paid off).
I saw what digital did to the printing industry. What it’s done to the drafting/architecture/modeling industry. What it’s done to the music industry. Productivity has increased massively since the early 80s, by most metrics that matter (if the TFP doesn’t reflect this, perhaps it’s not a very good model?), although I guess “that matter” might be a “matter” of opinion. Heh.
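For what it’s worth, TFP (total factor productivity) is basically the Solow residual: output divided by what capital and labor “should” produce. A back-of-the-envelope sketch (all numbers invented, and alpha assumed at a conventional ~0.3) shows how software-driven output gains would land entirely in that residual:

```python
# Toy Solow residual: A = Y / (K**alpha * L**(1 - alpha)).
# Invented numbers, purely to illustrate what the TFP statistic measures.
def tfp(output, capital, labor, alpha=0.3):
    return output / (capital ** alpha * labor ** (1 - alpha))

# Same capital and labor, 20% more output (say, because software made each
# worker more effective) shows up as a 20% higher residual:
print(tfp(output=100.0, capital=300.0, labor=50.0))  # baseline
print(tfp(output=120.0, capital=300.0, labor=50.0))  # higher "A"
```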
Or maybe it’s just messing with definitions? “Oh, we mean productivity in this other sense of the word!”. And if we are using non-standard (or maybe I should say “specialized”) meanings of “productivity”, how does demand factor in? Does it even make sense to break it into quarters? Yadda yadda
Mainly it’s just odd to have gotten super-productive as an individual[1], only to find out that this productivity is an illusion or something?
I must be missing the point.
Or maybe those gains in personal productivity have offset global productivity or something?
Or like, “AI” gets a lot of hype, so Microsoft lays off 10k workers to “focus” on it— which ironically does the opposite of what you’d think a new tech would do (add 10k, vs drop), or some such?
It seems like we’ve been progressing relatively steadily, as long as I’ve been around to notice, but then again, I’m not the most observant cookie in the box. ¯\_(ツ)_/¯
[1] I can fix most things in my house on my own now, thanks to YouTube videos of people showing how to do it. I can make studio-quality music and video with my phone. Etc.