These are really good points! I think I was kind of assuming that video, photo, and audio data should in some sense be adequate to address spatial processing and some of the other bottlenecks, but maybe not, and there are probably others I have no idea about.
I think on a strict interpretation of Christiano’s definition, we’re almost right on the bubble. Suppose we were to take something like Nvidia’s market cap as a very loose proxy for overall growth in accumulated global wealth due to AI. If it keeps doubling or tripling annually, but the return on that capital stays around 4-5% (i.e., if we assume the market prices things mostly correctly), then there would be a 4-year doubling just before the first 1-year doubling. But if it quadruples or faster annually, there won’t be. Note: my math is probably wrong here, and the metric I’m suggesting is definitely wrong, but I don’t think that affects my thinking on this in principle. I’m sure with some more work I could figure out the exact rate at which the exponent of an exponential can linearly increase while still technically complying with Christiano’s definition.
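To make that concrete, here’s a toy numeric check, not a claim about the real economy: model output as an exponential whose annual growth factor itself rises each year, and test Christiano’s criterion (a complete 4-year doubling finishing before the first 1-year doubling). Every number here is made up for illustration.

```python
# Toy check of Christiano's slow-takeoff criterion: does a complete
# 4-year output doubling finish before the first 1-year doubling?
# Output grows super-exponentially: the annual growth factor itself
# rises by `accel` each year. All inputs are illustrative.

def slow_takeoff(initial_growth: float, accel: float, years: int = 60) -> bool:
    output, growth = [1.0], initial_growth
    for _ in range(years):
        output.append(output[-1] * growth)
        growth += accel
    # First year t whose output doubles within the single year [t, t+1].
    first_fast = next(t for t in range(years) if output[t + 1] >= 2 * output[t])
    # Was there a complete 4-year doubling interval ending by that year?
    return any(output[t + 4] >= 2 * output[t] for t in range(max(first_fast - 3, 0)))

print(slow_takeoff(1.03, 0.04))  # True: growth ramps slowly, a 4-year doubling completes first
print(slow_takeoff(1.03, 0.50))  # False: growth jumps straight past the 4-year doubling
```

On this toy model, whether the criterion holds depends only on how steep the acceleration is relative to the starting growth rate, which is the knob the paragraph above is gesturing at.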
But really, I don’t think this kind of fine splitting gets at the underlying differences that distinguish fast and slow takeoff as their definers originally intended. I suspect that if we do have a classically-defined “fast” takeoff, it’ll never be reflected in GDP or asset market price statistics at all, because the metric will be obsolete before the data is collected.
Note: I don’t actually have a strong opinion or clear preference on whether takeoff will remain smooth or become sharp in this sense.
This is not a new thought, but I continue to find myself deeply confused by how so many people think the world does not contain enough data to train a mind to be highly smart and capable in at least most fields of human knowledge. Or, at least enough to train it to the point where reaching competence requires only a modest amount of hands-on practice and deliberate feedback. Like, I know the scaling pattern is going to be different for AI vs humans, but at a deep level, how do these people think schools and careers work?
I had a conversation with Claude today about something I was researching for work. I tried my best to phrase questions neutrally and not bias the responses, and it pretty much re-derived a lot of the things I’ve been trying to tell my coworkers and clients for years. I’d always wondered if maybe I was crazy and missing the obvious counterarguments, but now I wonder that a little less. I still don’t expect it to really convince anyone who wasn’t already on board.
It reminded me a bit of the old joke about the math professor who says, “This is trivial to prove.” A student asks, “Is it really trivial?” The professor stops lecturing and starts scribbling notes. Half an hour later he looks up and says, “Yes, it’s trivial,” then moves on with the rest of the planned lecture.
I’m very much looking forward to when I can upload this kind of transcript to something like Deep Research and say “Write a much more detailed report with lots of references about all of these questions. Also make me a slide deck and talk track to present it to these different kinds of audiences.”
Then the company is just being stupid
Maybe they are, but I think the word “just” assumes that not being stupid is much easier than it actually is. Often the company is stupid without any individual employees/managers/executives being stupid or being empowered to fix the stupidity, in a context where no one has the convening power to bring together a sufficient set of stakeholders in some larger system to fix the problem without that costing much more than it is worth.
Some company stupidity comes from individual executives and managers not being capable (because they’re human) of absorbing all information about what’s going on in different branches of the company and finding ways to make positive-sum trades that seem obvious to outsiders (this is especially common in large conglomerates). I encounter this all the time as a consultant, and the amount of inertia that needs to be overcome to improve it can be huge.
Some comes from having to comply with all kinds of stupid and outdated and confusing laws (e.g. “The meeting is required because this is how the tax code is written, because that’s how they did it before email and before we moved the factory away from the head offices, and good luck getting the government to change that”), sometimes while also trying to be even-handed to employees living in different jurisdictions with different laws (e.g. “Oh, well, the meeting is mandatory in city A and we like to have a unified policy about meetings across the company, but we’re not allowed to provide or reimburse for the tuxedos in country B, and state C has a law that if we raised country B’s wages to pay for the tuxedo we’d have to do it for everyone, and we can’t afford that”).
Got it. Then I agree with that. I’m curious if you’ve thought about where you’d put lower and upper bound estimates on capabilities before hitting that bottleneck?
This is all true, but I’m not sure the claimed implications are so certain. The problem is, different minds can gain different levels of insight out of the same data and tools.
First, we should assume humanity has enough data to enable the best human minds to reach the highest levels of every capability available to humans with very, very little real-world feedback. It’s not ASI in the full sense, but there has never been a human mind that contained all such abilities at once, let alone with an AI’s other default advantages.
Second, it seems extremely unlikely to me that the available data does not include patterns no human has ever found and understood. All collected data has yet to be completely correlated and put together in all possible relationships. I don’t have a strong sense of the limits of what should be possible with current data. At minimum I expect an ASI to have better pure and applied math tools to apply to any task, and to require less data than we do for any given purpose.
Third, with proper tool support, I’m not sure how much physical experimentation and feedback can be substituted with high-quality simulation using software based on known physics, chemistry, and biology. At minimum, this should enable answering a lot of questions that current humanity knows how to answer by formulaic investigation but has never specifically asked or bothered writing down an answer to.
To me this indicates that at the limit of enough compute with better training methods, AI should be able to push at least somewhat beyond the limits of what humans have ever concluded from available data, in every field, before needing to obtain any additional, new data.
Reading your comment and then rereading mine, I think I’ve been doing a terrible job explaining myself. I am not generally in favor of central planning, and am generally in favor of permitting reform, more utility scale solar, fewer subsidies, removal of net metering, and introduction of real time electricity pricing.
What I haven’t been commenting on is which things I think are going to happen whether I like it or not, which things I think would be good but only if we also remove the other distortions they currently counterbalance, and which I don’t think are politically feasible regardless of what their practical impacts would be.
I think within a few years it will become clear to many farmers that agrivoltaics would be a net benefit to themselves, so long as policy doesn’t stand in their way. There’s a lot more buried in that caveat than I feel like going into here, though.
FWIW, I agree with that. But, while land is not scarce in the US, long distance transmission capacity is. There are definitely places where putting solar on roofs is cheaper, or at least faster and easier, than getting large amounts of utility scale solar permitted and building the transmission capacity to bring it to where the demand is.
And I don’t just think agrivoltaics is cool. I think it dodges a lot of usually-bogus-but-impactful objections that so many large scale new construction projects get hit with.
Excellent comment; it spells out, better than I had, a lot of thoughts I’d been dancing around for a while.
-- Avoid tying up capital in illiquid plans. One exception is housing since ‘land on Holy Terra’ still seems quite valuable in many scenarios.
This is the step I’m on. Just bought land after saving up for several years while being nomadic, planning on building a small house soon in such a way that I can quickly make myself minimally dependent on outside resources if I need to. In any AI scenario that respects property rights, this seems valuable to me.
-- Make whatever spiritual preparations you can, whatever spirituality means to you. If you are inclined to Buddhism meditate. Practice loving kindness. Go to church if you are Christian. Talk to your loved ones. Even if you are atheist you need to prepare your metaphorical spirit for what may come. Physical health may also be valuable here.
This I need to double down on. I’ve been more focused on trying to get others close to me on board with my expectations of impending weirdness, with moderate success.
The idea that AIs would go around killing everyone instead of just doing what we tell them to do seems like science fiction.
I’ve had this experience too. The part that baffles me about it is the seeming lack of awareness of the gap between “what we tell them to do” and “what we want them to do.” This gap isn’t sci-fi; it already exists in very clear ways (and should be very familiar to anyone who has ever written any code of any kind).
I have (non-AI-expert) colleagues I’ve talked to about LLM use who dismiss the response to a prompt as nonsense, so I ask to see the chat logs, and I dig into it for 5 minutes. Then I inform them that actually, the LLM’s answer is correct, you didn’t ask what you thought you were asking, you missed an opportunity to learn something new, and also here’s the three-times-as-long version of the prompt that gives enough context to actually do what you expected.
Edit to add: I’m also very much a fan of panko toasted in butter in a pan as a topping. Can be made in advance and left in a bowl next to the mac and cheese for everyone to use however much they want.
Yes, at conferences I’ve been to the discussion is increasingly not “How will we afford all the long term energy storage?” so much as “how much of a role will there be for long term energy storage?”
Personally I’m fairly confident that we’ll eventually need at least 4-16 hours of energy storage in most places, and more in some grids, but I suspect that many places will be able to muddle and kludge their way through most multi-day storage needs with a bunch of other partial solutions that generate power or shift demand.
One of many cases where it’s much easier to predict the long-term trajectory than the path to get there, and yet most people still don’t predict even that.
I like to put the numbers in a form that less mathy folks seem to find intuitive. If you wanted to replace all of the world’s current primary energy use with current solar panels, and had enough storage to make it all work, then the land area you’d need is approximately that of South Korea. Huge, but also not really that big. (Note: current global solar panel manufacturing capacity is enough to get us about a third to a half of the way there if we fully utilize it over the next 25 years.)
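For the curious, the back-of-envelope behind that figure runs roughly as follows. Every input here is a round-number assumption of mine, not a measured value, and the answer swings severalfold with the irradiance and electrification assumptions:

```python
# Back-of-envelope for the "approximately South Korea" claim.
# All inputs are round-number assumptions for illustration only.

world_primary_tw = 19      # ~600 EJ/yr of primary energy, as average terawatts
electric_equiv = 1 / 3     # electricity displaces primary energy at roughly 3:1
avg_irradiance = 250       # W/m^2 averaged over the year, sunny sites
panel_efficiency = 0.22    # sunlight -> electricity

needed_tw = world_primary_tw * electric_equiv        # ~6.3 TW of average output
watts_per_m2 = avg_irradiance * panel_efficiency     # ~55 W per m^2 of panel
panel_area_km2 = needed_tw * 1e12 / watts_per_m2 / 1e6

print(f"~{panel_area_km2:,.0f} km^2 of panels (South Korea: ~100,000 km^2)")
```

That lands near 115,000 km² of panel area; total land is somewhat larger once you add row spacing, which is part of why this should be read as an order-of-magnitude illustration rather than a siting plan.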
In practice I think over the next handful of decades we’re going to need 3-10x that much electricity, but even that doesn’t really change the conclusion, just the path. But also, we can plausibly expect solar panel efficiencies and capacity factors to go up as we start moving towards better types of PV tech. For example, based on already demonstrated performance values, a 2- or 3-junction tandem bifacial perovskite solar panel (which no one currently manufactures at scale, and which seemed implausible to most people, including me, even two years ago) could get you close to double the current kWh/m² we get from silicon, and the power would be spread much more evenly throughout the day and year.
Context: right now gas peaker plants with ~10% utilization have an LCOE of about 20 cents/kWh, about 3-5x most other energy sources. I think in the proposed scenario here we’d be more like 20-40% utilization, since we’d also get some use out of these systems overnight and in winter.
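The utilization number matters so much because a peaker’s costs are mostly fixed, so its LCOE falls nearly in proportion to capacity factor. A rough sketch with illustrative cost inputs (round numbers of mine, not quotes from any study):

```python
# Rough LCOE vs. utilization for a gas peaker.
# All cost inputs are illustrative round numbers.

CAPEX = 950       # $/kW overnight capital cost
FCR = 0.10        # fixed charge rate (annualized capital recovery)
FIXED_OM = 15     # $/kW-yr fixed operations and maintenance
FUEL = 0.040      # $/kWh (heat rate ~10 MMBtu/MWh at ~$4/MMBtu gas)
VAR_OM = 0.005    # $/kWh variable O&M

def lcoe(capacity_factor: float) -> float:
    """Levelized cost of electricity in $/kWh at a given capacity factor."""
    annual_kwh_per_kw = 8760 * capacity_factor
    fixed = (CAPEX * FCR + FIXED_OM) / annual_kwh_per_kw
    return fixed + FUEL + VAR_OM

for cf in (0.10, 0.20, 0.40):
    print(f"CF {cf:.0%}: {lcoe(cf) * 100:.0f} cents/kWh")
# CF 10%: ~17 cents/kWh; CF 40%: ~8 cents/kWh. Moving from peaker-style
# to 20-40% utilization alone closes much of the cost gap.
```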
If this became much more common and people had to pay such variable prices, we’d also be able to do a lot more load shifting to minimize the impact on overall energy costs (when to dry clothes and heat water, using phase change materials in HVAC, using thermal storage in industrial facilities’ systems, etc.).
I agree with this reasoning.
I’d add that part of the answer is: as various other relevant technologies become cheaper, both the solar farm and nuclear plant operators (and/or their customers) are going to invest in some combination of batteries and electrolyzers (probably SOECs for nuclear to use some of the excess heat) and carbon capture equipment and other things in order to make other useful products (methanol, ammonia, fuels, other chemicals, steel, aluminum, etc.) using the excess capacity.
Personally I’m not a fan of the pasta texture of baked mac and cheese, but I’ve definitely sauced the cooked pasta, topped with cheese, and broiled it. That’s fast, and you could spread it across multiple pans so it has more surface area. I suspect a blow torch could also work?
Our solar farms are not yet visible from space; we don’t yet have patches of desert turning black.
Keep in mind that as best we have researched so far, agrivoltaics enable dual land use, and for some crops in some environments can increase crop yields and lower water consumption. It is not obvious that meeting most electricity demand with solar requires all that much new land, if we start to make effective use of both farmland and rooftops. From what I’ve read it looks like depending on a whole lot of factors you can get about 80% as much electricity as a dedicated solar farm and 80-160% as much crop yield as a regular farm at the same time. This seems to be true even for corn (5-10% decrease in yield) and pasture (increase in yield). When you consider that ~8% of the world’s cropland is used for ethanol production (>30% for corn in the US), this suggests that a switch towards solar electric for vehicles and away from biofuels might plausibly keep land requirements for farming constant or even reduce the total.
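One way to make the dual-use claim concrete is the land equivalent ratio (LER): how much dedicated single-use land one hectare of the combined system replaces. A minimal sketch using the rough ranges above (the specific numbers are assumptions, not measurements):

```python
# Land equivalent ratio (LER) for agrivoltaics, using the rough
# ranges cited above. Inputs are assumptions for illustration.

def ler(relative_crop_yield: float, relative_pv_yield: float) -> float:
    """Dedicated-use land replaced per unit of combined-use land."""
    return relative_crop_yield + relative_pv_yield

print(ler(0.95, 0.80))  # corn with ~5% yield loss -> 1.75
print(ler(1.20, 0.80))  # pasture with a yield gain -> 2.00
# LER > 1 means one hectare of agrivoltaics out-produces splitting
# that hectare between a dedicated farm and a dedicated solar plant.
```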
I have absolutely no confidence in any government’s capacity to actually structure markets, regulations, and incentives in a way that allows us to realize anything like an optimal food and energy system. But, if there were a food-and-energy-czar actually designing such a system, this problem has several foreseeable solutions. And as both solar panels and batteries continue to get cheaper and pressure to decarbonize increases, I think we’ll stumble towards something like this anyway.
It is a lot of assumption and conjecture, that’s true. But it is not all conjecture and assumptions. When comparative advantage applies despite one side having an absolute advantage, we know why it applies. We can point to which premises of the theory are load-bearing, and know what happens when we break those premises. We can point to examples within the range of scenarios that exist among humans, where it doesn’t apply, without ever considering what other capabilities an ASI might have.
I will say I do think there’s a bit of misdirection, not by you, but by a lot of the people who like to talk about comparative advantage in this context, to the point that I find it almost funny that it’s the people questioning premises (like this post does) who get accused of making assumptions and conjectures. I’ve read a number of articles that start by talking about how comparative advantage normally means there’s value in one agent’s labor even when another has absolute advantage, which is of course true. Then they simply assume the necessary premises apply in the context of humans and ASI, without ever actually investigating that assumption, looking for limits and edge cases, or asking what actually happens if and when they don’t hold. In other words, the articles I’ve read aren’t trying to figure out whether comparative advantage is likely to apply in this case. They’re simply assuming it will, and that those questioning this assumption or asking about the probability and conditions of it holding don’t understand the underlying theory.
For comparative advantage to apply, there are conditions. Breaking the conditions doesn’t always break comparative advantage, of course, because none of them ever perfectly applies in real life, but they are the openings that allow it to sometimes not apply. Many of these are predictably broken more often when dealing with ASI, meaning there will be more examples where comparative advantage considerations do not control the outcome.
A) Perfect factor mobility within but none between countries.
B) Zero transportation costs.
Plausibly these two apply about as well to the ASI scenario as among humans? Although with labor as a factor, human skill and knowledge act as limiters in ways that just don’t apply to ASI.
C) Constant returns to scale—untrue in general, but even small discrepancies would be much more significant if ASI typically operates at a much larger or much more finely tuned scale than humans can.
D) No externalities—potentially very different in ASI scenario, since methods used for production will also be very different in many cases, and externalities will have very different impacts on ASI vs on humans.
E) Perfect information—theoretically impossible in the ASI scenario, since the ASI will have better information and a better understanding of it.
F) Equivalent products that differ only in price—not true in general, quality varies by source, and ASI amplifies this gap.
For me, the relevant questions, given all this, are 1) Will comparative advantage still favor ASI hiring humans for any given tasks? 2) If so, will the wage at which ASI is better off choosing to pay humans be at or above subsistence? 3) If so, are there enough such scenarios to support the current human population? 4) Will 1-3 continue to hold in the long run? 5) Are we confident enough in 1-4 for these considerations to meaningfully affect our strategy in developing and deploying AI systems of various sorts?
I happily grant that (1) is likely. (2) is possible but I find it doubtful except in early transitional periods. (3)-(4) seem very, very implausible to me. (5) I don’t know enough about to begin to think about concretely, which means I have to assume “no” to avoid doing very stupid things.
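A toy Ricardian example, with made-up productivities, helps separate question (1) from question (2): comparative advantage guarantees gains from trade, but the wage a human can command is capped by their own absolute productivity times the best terms of trade on offer.

```python
# Toy Ricardian model with made-up numbers: comparative advantage can
# hold (question 1 = yes) while the implied wage sits below
# subsistence (question 2 = no). Output per hour of labor:

asi_output   = {"food": 1000.0, "chips": 1000.0}
human_output = {"food": 1.0,    "chips": 0.1}

# Opportunity cost of food, in chips forgone per unit of food produced.
asi_opp   = asi_output["chips"] / asi_output["food"]      # 1.0
human_opp = human_output["chips"] / human_output["food"]  # 0.1

assert human_opp < asi_opp  # the human has the comparative advantage in food

# With trade, the price of food (in chips) lands somewhere in
# [human_opp, asi_opp]; the human specializes in food and earns at most:
max_wage = human_output["food"] * asi_opp  # 1.0 chips-equivalent per hour

subsistence = 5.0  # chips-equivalent per hour needed to live (assumed)
print(f"max wage: {max_wage}, subsistence: {subsistence}, viable: {max_wage >= subsistence}")
```

Nothing in the theory pins the human’s productivity ceiling above subsistence; that has to come from somewhere outside comparative advantage, which is exactly where questions (2)-(4) do their work.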
This is really interesting! And lines up with my own anecdotal experience of watching candles make my walls sooty over the course of just a couple of years.
Out of curiosity, are there any studies on the effects of wood fires at the community level? How far apart do housing units need to be for the effect of smoke from the chimney on other houses to be negligible?