How do we count specialized language? By this I mean stuff like technical or scientific specialties, which are chock-full of jargon. The more specialized they are, the less they share with related topics. I would expect we generate a lot more jargon now than before, and jargon words are mostly stand-ins for entire paragraphs (or longer) of explanation.
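One crude way to operationalize the counting, just as a sketch: treat any word that falls outside a general-purpose frequency list as jargon and measure its share of a text's vocabulary. The word-list file and the sample sentence below are placeholders I made up.

```python
# Crude jargon counter: the share of distinct words in a text that do not
# appear in a general-purpose frequency list.
# "common_10k.txt" is a hypothetical file with one frequent English word per line.
import re

def jargon_density(text: str, common_words: set) -> float:
    """Fraction of distinct words in `text` that fall outside the common list."""
    types = set(re.findall(r"[a-z]+", text.lower()))
    if not types:
        return 0.0
    return len(types - common_words) / len(types)

with open("common_10k.txt") as f:
    common = {line.strip().lower() for line in f}

sample = "The adjoint functor theorem characterizes right adjoints via limits."
print(f"Jargon density: {jargon_density(sample, common):.2f}")
```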
Related to jargon: academic publishing styles. Among other things, academic writing style is notorious for being difficult for outsiders to penetrate, and for making no accommodation for the reader at all (even the intended audience). I have the sense that papers in research journals have almost evolved in the opposite direction, although I note my perception is based on examples of older papers with an excellent reputation, which is a strong survivorship bias. Yet those papers were usually the papers that launched new fields of inquiry; it seems to me they required stylistic differences, like explaining intuitions, because the information was not available anywhere else.
Unrelated to the first two, it feels like we should circle back to the relationship between speaking and writing. How have sentences and wordcount fared when spoken? We have much less data for this because it requires recording devices, but I seem to recall this being important to settling the question of whether the Iliad could be a written-down version of oral tradition. The trick there was they recorded some bards in Macedonia in the early 20th century performing their stories, transcribed the recordings, and then found them to be of comparable length to Homer. Therefore, oral tradition was ruled in.
Good fiction might be hard, but that doesn’t much matter to selling books. This thing is clearly capable of writing endless variations on vampire romances, Forgotten Realms or Magic the Gathering books, Official Novelization of the Major Motion Picture X, etc.
Writing as an art will live. Writing as a career is over.
And all of this will happen far faster than it did in the past, so people won’t get a chance to adapt. If your job gets eliminated by AI, you won’t even have time to reskill for a new job before AI takes that one too.
I propose an alternative to speed as the explanation: all previous forms of automation were local. Each factory had to be automated in bespoke fashion one at a time; a person could move from a factory that was automated to any other factory that had not been yet. The automation equipment had to be made somewhere and then moved to where the automation was happening.
By contrast, AI is global. Every office on earth can be automated at the same time (relative to historical timescales). There’s no bottleneck chain where the automation has to be deployed to one locality, after being assembled in a different locality, from parts made in many different localities. The limitations are network bandwidth and available compute, both of which are shared resource pools and complements besides.
I like this effort, and I have a few suggestions:
Humanoid robots are much more difficult than non-humanoid ones. There are a lot more joints than in other designs; the balance question demands both more capable components and more advanced controls; as a consequence of the balance and shape questions, a lot of thought needs to go into wrangling weight ratios, which means preferring more expensive materials for lightness, etc.
In terms of modifying your analysis, I think this cashes out as greater material intensity—the calculations here are done by weight of materials, so we just need a way to account for the humanoid robot requiring more processing on all of those materials. We could say something like: 1500kg of humanoid robot materials take twice as much processing/refinement as 1500kg of car materials (occasionally this will be about the same; for small fractions of the weight it will be 10x the processing, etc.).
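To make that adjustment concrete, here is a minimal sketch; every fraction and multiplier in it is a made-up placeholder rather than real data.

```python
# Sketch: weight raw material mass by a processing-intensity multiplier
# instead of treating a robot-kg and a car-kg as interchangeable.
# All fractions and multipliers below are illustrative assumptions.

CAR_MATERIALS_KG = 1500
ROBOT_MATERIALS_KG = 1500

# Hypothetical breakdown of the robot's mass, with processing effort
# expressed relative to automotive-grade processing (car = 1.0).
robot_material_mix = {
    "structural alloys":    {"fraction": 0.75, "multiplier": 1.0},
    "actuators and joints": {"fraction": 0.15, "multiplier": 4.0},
    "sensors and compute":  {"fraction": 0.10, "multiplier": 8.0},
}

def car_equivalent_processing_kg(total_kg, mix):
    """Convert raw kg into 'car-equivalent processing kg'."""
    return sum(total_kg * m["fraction"] * m["multiplier"] for m in mix.values())

robot_kg_eq = car_equivalent_processing_kg(ROBOT_MATERIALS_KG, robot_material_mix)
print(f"Car:   {CAR_MATERIALS_KG} car-equivalent processing kg (baseline)")
print(f"Robot: {robot_kg_eq:.0f} car-equivalent processing kg "
      f"(~{robot_kg_eq / CAR_MATERIALS_KG:.1f}x a car)")
```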
The humanoid robots are more vulnerable to bottlenecks than cars. Specifically they need more compute and rare earth elements like neodymium, which will be tough because that supply chain is already strained by new datacenters and AI demands.
This is a fun idea! I was recently poking at field line reconnection myself, in conversation with Claude.
I don’t think the energy balance turns out in the idea’s favor. Here are the heuristics I considered:

The first thing I note is what happens during reconnection: a bunch of the magnetic energy turns into kinetic and thermal energy. The part you plan to harvest is just the electric field part. Even in otherwise ideal circumstances, that’s a substantial loss.
The second thing I note is that in a fusion reactor, the magnetic field is already being generated by the device, via electromagnets. This makes the process look like putting current into a magnetic field, then breaking the magnetic field in order to get less current back out (because of the first note).
The third thing I note is that reconnection is about the reconfiguration of the magnetic field lines. I’m highly confident that the electric fields at the moment the lines break define how the lines reconnect, so if you induct all of that energy out, the reconnection will look different than it otherwise would have. Mostly this would cash out as a weaker magnetic field than there would otherwise be, driving more recharging of the magnetic field and making the balance worse.
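To put rough numbers on the first two notes, here is a back-of-envelope accounting; every efficiency and fraction in it is an assumption I made up for illustration, not a measured value.

```python
# Back-of-envelope accounting for harvesting reconnection energy in a reactor.
# All efficiencies and fractions are illustrative assumptions.

E_drive = 1.0            # energy spent driving the electromagnets (normalized)
coil_efficiency = 0.9    # assumed fraction that ends up as magnetic field energy

# During reconnection the magnetic energy splits between kinetic/thermal
# motion of the plasma and the electric field you could try to harvest (note 1).
fraction_to_electric = 0.3   # assumed: most goes to kinetic/thermal energy
pickup_efficiency = 0.8      # assumed losses in whatever inductive pickup is used

E_magnetic = E_drive * coil_efficiency
E_harvested = E_magnetic * fraction_to_electric * pickup_efficiency

print(f"Energy in:  {E_drive:.2f}")
print(f"Energy out: {E_harvested:.2f}")   # ~0.22, a net loss even before note 3
```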
All of that being said, Claude and ChatGPT both respond well to sanity checking. You can say directly something like: “Sanity check: is this consistent with thermodynamics?”
I also think that ChatGPT misleadingly treated the magnetic fields and electric fields as being separate because it was using an ideal MHD model, where this is common due to the simplifications the model makes. In my experience at least Claude catches a lot of confusion and oversights by asking specifically about the differences between the physics and the model.
Regarding The Two Cultures essay:
I have gained so much buttressing context from reading dedicated history about science and math that I have come around to a much blunter position than Snow’s. I claim that an ahistorical technical education is technically deficient. If a person reads no history of math, science, or engineering, then they will be a worse mathematician, scientist, or engineer, full stop.
Specialist histories can show how the big problems were really solved over time.[1] They can show how promising paths still wind up being wrong, and the important differences between the successful method and the failed one. They can show how partial solutions relate to the whole problem. They can show how legendary genius once struggled with the same concepts that you now struggle with.
- ^ Usually—usually! As in a majority of the time!—this does not agree with the popular narrative about the problem.
I would like to extend this slightly by switching perspective to the other side of the coin. The drop-in remote worker is not a problem of anthropomorphizing AI, so much as it is anthropomorphizing the need in the first place. Companies create roles with the expectation people will fill them, but that is the habit of the org, not the threshold of the need.
Adoption is being slowed down considerably by people asking for AI to be like a person, so we can ask that person to do some task. Most companies and people are not asking more directly for an AI to meet a need. Figuring out how to do that is a problem to solve by itself, and there hasn’t been much call for it to date.
Why don’t you expect AGIs to be able to do that too?
I do, I just expect it to take a few iterations. I don’t expect any kind of stable niche for humans after AGI appears.
I agree that the economic principles conflict; you are correct that my question was about the human labor part. I don’t even require that they be substitutes; at the level of abstraction we are working in, it seems perfectly plausible that some new niches will open up. Anything would qualify, even if it is some new-fangled job title like ‘adaptation engineer’ or something that just preps new types of environments for teleoperation before moving on to the next environment, like some kind of meta railroad gang. In this case the value of human labor might stay sustainably high in terms of total value, but the amplitude of the value would sort of slide into the few AI-relevant niches.
I think this cashes out as Principle A winning out and Principle B winning out looking the same for most people.
Obviously, at least one of those predictions is wrong. That’s what I said in the post.
Does one of them need to be wrong? What stops a situation where only one niche, or a few niches, are high value and the rest do not provide enough to eat? This is pretty much exactly how natural selection operates, for example.
I agree fake pictures are harder to threaten with. But consider that the deepfake method makes everyone a potential target, rather than only targeting the population who would fall for the relationship side of the scam.
There are other reasons I think it would be grimly effective, but I am not about to spell it out for team evil.
He also claims that with the rise of deepfakes you can always run the Shaggy defense if the scammer actually does pull the trigger.
With the rise of deepfakes, the scammers can skip steps 1-3, and also more easily target girls.
Chip fabs and electricity generation are capital!
Yes, but so are ice cream trucks and the whirligig rides at the fair. Having “access to capital” means little if you are buying an ice cream truck, but means a great deal if you have a rare earth refinery.
My claim is that the big distinction now is between labor and capital because everyone had about an equally hard time getting labor; when AI replacement happens and that goes away, the next big distinction will be between different types of what we now generically refer to as capital. The term is uselessly broad in my opinion: we need to go down at least one level towards concreteness to talk about the future better.
I agree with the ideas of AI being labor-replacing, and I also agree that the future is likely to be more unequal than the present.
Even so, I strongly predict that the post-AGI future will not be static. Capital will not matter more than ever after AGI: instead I claim it will be a useless category.
The crux of my claim is that when AI replaces labor and buying results is easy, the value will shift to the next biggest bottlenecks in production. Therefore future inequality will be defined by the relationship to these bottlenecks, and the new distinctions will be drawn around them. This will split up existing powers.
In no particular order, here are some more-concrete examples of the kind of thing I am talking about:
Since AI is the major driver of change, the best candidate bottlenecks are the ones for wider deployment of AI. See how Nvidia is now the 2nd or 3rd largest company in the world by market cap: they sell the compute upon which AI depends. Their market cap is larger than any company directly competing in the current AI race. The bottlenecks to compute production are constructing chip fabs; electricity; the availability of rare earth minerals.
About regular human welfare: the lower this goes, the less incentive there is for solidarity among the powers that be. Consider the largest company in the world by revenue, Walmart, which is in the retail business. Amazon is a player in the AI economy as a seller of compute services and through its partnership with Anthropic, but ~80% of its revenue comes from the same sector as Walmart. Right now, both companies have an interest in a growing population with growing wealth and are on the same side. If the population and its buying power begin to shrink, they will be in an existential fight over the remainder, yielding an AI-insider/AI-outsider division. See also Google and Facebook, which sell ads that sell stuff to the population. And the agriculture sector, which makes food. And Apple, the largest company in the world by market cap, which sells consumer devices. They all benefit from more human welfare rather than less; if it collapses, they all die (maybe except for the AI-related parts). Is the weight of their (current) capital going to fall on the side of reducing human welfare?
I also think the AI labor replacement is initially on the side of equality. Consider the law: the existing powers systematically crush regular people in courts because they have access to lots of specialist labor in the form of lawyers. Now, any single person who is a competent user of Claude can feasibly match the output of any traditional legal team, and therefore survive the traditional strategy of dragging out the proceedings with paperwork until the side with less money runs out. The rarest and most expensive labor will probably be the first to be replaced because the profit will be largest. The exclusive access to this labor is fundamental to the power imbalance of wealth inequality, so its replacement is an equalizing force.
This is a fantastic post, immediately leaping into the top 25 of my favorite LessWrong posts all-time, at least.
I have a concrete suggestion for this issue:
They end up spending quite a lot of effort and attention on loudly reiterating why it was impossible, and ~0 effort on figuring how they could have solved it anyway.
I propose switching gears at this point to make “Why is the problem impossible?” the actual focus of their efforts for the remainder of the time period. I predict this will consistently yield partial progress among at least a chunk of the participants.
I suggest deliberately thinking about the question of why it is impossible because I experienced great progress on an idea I had through exactly that mechanism, in a similar condition of not having the relevant physics knowledge. The short version of the story is that I had the idea, almost immediately hit upon a problem that seemed impossible, and then concluded it would never work. Walking down the stairs right after having concluded it was impossible, I thought to myself “But why is it impossible?” and spent a lot of time following up on that thread. The whole investigation was iterations of that theme—an impossible blocker would appear, I would insist on understanding the impossibility, and every time it would eventually yield (in the sense of a new path forward at least; rarely was it just directly possible instead). As it stands I now have definite concrete angles of attack to make it work, which is the current phase.
My core intuition for why this worked:
Impossibility requires grappling with fundamentals; there is no alternative.
It naturally distinguishes between the problem and the approach to the problem.
I gesture in the direction of things like the speed of light, the 2nd law of thermodynamics, and the halting problem to make the claim that fundamental limits are good practice to think about.
I think this post is quite important because it is about Skin in the Game. Normally we love it, but here is the doubly-interesting case of wanting to reduce the financial version in order to allow the space for better thinking.
The content of the question is good by itself as a moment in time of thinking about the problem. The answers to the question are good both for what they contain, and also for what they do not contain, by which I mean what we want to see come up in questions of this kind to answer them better.
As a follow-up, I would like to see a more concrete exploration of options for how to deal with this kind of situation. Sort of like an inverse of the question “how does one bet on AI profitably?” In this case it might be the question “how does one neutralize implicit bets on AI?” By options I mean specific financial vehicles and legal maneuvers.
But if you introduce AI into the mix, you don’t only get to duplicate exactly the ‘AI shaped holes’ in the previous efforts.
I have decided I like the AI shaped holes phraseology, because it highlights the degree to which this is basically a failure in the perception of human managers. There aren’t any AI shaped holes because the entire pitch with AI is we have to tell the AI what shape to take. Even if we constrain ourselves to LLMs, the AI docs literally and exactly describe how to tell it what role to fill.
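For example, with the Anthropic Python SDK the "shape" is just the system prompt you hand it. A minimal sketch, where the model alias, role, and task are arbitrary examples of my own:

```python
# Minimal sketch of telling an LLM what role to fill via a system prompt.
# Requires the `anthropic` package and an ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-5-sonnet-latest",   # example model alias
    max_tokens=500,
    # The "shape" we want the AI to take: an explicit role description.
    system="You are a contracts paralegal. Flag clauses that shift liability "
           "onto our side and summarize them in plain language.",
    messages=[{"role": "user", "content": "Review this indemnification clause: ..."}],
)

print(response.content[0].text)
```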
Let’s say Company A can make AGIs that are drop-in replacements for highly-skilled humans at any existing remote job (including e.g. “company founder”), and no other company can. And Company C is a cloud provider. Then Company A will be able to outbid every other company for Company C’s cloud compute, since Company A is able to turn cloud compute directly into massive revenue. It can just buy more and more cloud compute from C and every other company, funding itself with rapid exponential growth, until the whole world is saturated.
I think this is outside the timeline under consideration. Transforming compute into massive revenue is still gated by the ability of non-AGI-enabled customers to decide to spend with Company A; regardless of price, the ability of Company C to make more compute available to sell depends quite a bit on the timelines of their contracts with other companies, etc. The ability to outbid the whole rest of the world for commercial compute already crosses the transformational threshold, I claim. This remains true regardless of whether it is a single dominant bidder or several.
I think the timeline we are looking at is from initial launch through the first round of compute-buy. This still leaves all normal customers of compute as bidders, so I would expect the amount of additional compute going to AGI to be a small fraction of the total.
Though let the record reflect that, based on the other details in the estimate, this could still be an enormous increase in the population.
I endorse this movie unironically. It is a classic film for tracking what information you have and don’t have, how many possibilities there are, etc.
Also the filmmaker maintains to this day that they left the truth of the matter in the final scene undefined on purpose, so we are spared the logic being hideously hacked-off to suit the narrative and have to live with the uncertainty instead.
No matter how much I try, I just cannot force myself to buy the premise of replacement of human labor as a reasonable goal. Consider the apocryphal quote:

“If I had asked people what they wanted, they would have said faster horses.”
I’m clearly in the wrong here, because every CEO who talks about the subject talks about faster horses[1], and here we have Mechanize whose goal is to build faster horses, and here is the AI community concerned about the severe societal impacts of digital horse shit.
Why, exactly, are all these people who are into accelerating technical development and automation of the economy working so hard at cramming the AI into the shape of a horse?
- ^ For clarity, faster horses here is a metaphor for the AI just replacing human workers at their existing jobs.