Thriving in the Weird Times: Preparing for the 100X Economy
Epistemic status: Confident in the general concept, but not in the specific details.
The world is changing fast. As soon as 2026[1], we may experience an exponential increase in personal productivity, intelligence, and overall capacity to influence the world, powered by new AI tools. To remain relevant in this new era, individuals must embrace these tools.
In this emerging economy, certain factors will determine one’s ability to excel:
Easily transmitted and quickly acquired factors, such as access to public AI tools like ChatGPT, AutoGPT, and new AI assistants, as well as novel productivity workflows and strategies[2]. Since the disparity in access to these resources among individuals will be minimal, this post will not focus on them.
Resources that are difficult to transmit and slow to acquire, which will be the key differentiators among people in this rapidly evolving technological landscape. Examples include personal datasets, trained mental habits, and the ability to swiftly adapt to new AI tools and workflows. Acquiring these resources might take months or even years, and the process cannot be compressed into a brief period. For instance, fully realizing the productivity benefits of a tool like GitHub Copilot requires individuals to deeply adapt their coding practices.
To succeed in this environment, it is vital to identify and begin acquiring these slow-to-acquire resources now to gain a competitive advantage. It is preferable to possess unused resources than to encounter insurmountable bottlenecks in the future. Here are some essential resources for this new era:
Mental habits: Learn to effectively use new AI tools, verbalize thoughts and ideas, and manage tight feedback loops with AI. Cultivating a habit of continuous learning and adaptation to new technologies is crucial. Becoming a skilled cyborg will be necessary to harness future AI creativity.
Personal datasets: Assembling and curating personal datasets for use with personal assistants or for AI model training could become individuals’ most valuable digital assets in the new economy[3]. Start building these datasets now to ensure you have the necessary information for future AI tools. One example: a friend of mine has recorded everything on his screen for the past two years, a corpus that future AI assistants could leverage (a minimal sketch of such a logging loop follows this list).
Classic capital: Money will still matter. Access to financial resources allows for flexibility and for investment in new tools, training, and staying ahead of competitors. If the economy begins growing rapidly[4], having significant initial capital is the best way to benefit from that growth. As AI tools advance and become more expensive, subscribing to them will be essential for boosting productivity. The cost of these tools could surpass $1,000 per month: at roughly $1 per call (recall GPT-4-32k’s pricing), 35 calls a day already comes to over $1,000 a month.
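As an illustration of the personal-datasets point, here is a minimal sketch of what a screen-logging loop could look like. It assumes the third-party Python `mss` library; the capture interval and file layout are arbitrary choices, not a claim about how my friend’s setup works.

```python
# Minimal screen-logging sketch. Assumes the third-party `mss` package
# (pip install mss); the 60-second interval is an arbitrary choice.
import os
import time
from datetime import datetime

import mss

def capture_screen_forever(out_dir: str = "screen_log", interval_s: int = 60) -> None:
    """Save a timestamped screenshot of all monitors every `interval_s` seconds."""
    os.makedirs(out_dir, exist_ok=True)
    with mss.mss() as sct:
        while True:
            stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
            # mon=-1 captures all monitors as a single image.
            sct.shot(mon=-1, output=os.path.join(out_dir, f"{stamp}.png"))
            time.sleep(interval_s)
```

Raw screenshots pile up quickly; pairing them with OCR or periodic summarization would make the archive far more useful to a future assistant, but even raw logs preserve information that could later be mined.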
To obtain these resources, prioritize those that are most valuable right now (so their returns can compound) and those that will take the longest to acquire. Examples of wise investments at the moment include:
Learning to use LangChain and rewriting AutoGPT to practice building AI tools
Regularly selecting and integrating new tools into your workflow
Training in prompt engineering techniques to sharpen your intuitions about language models
Developing the habit of verbalizing thoughts and saving them with Otter.ai or Whisper to build your personal dataset (see the sketch after this list)
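For the last item, here is a minimal sketch of turning voice notes into a personal dataset, assuming the open-source `whisper` package; the JSONL record format and file names are arbitrary choices.

```python
# Minimal sketch: transcribe a voice note with the open-source `whisper`
# package (pip install openai-whisper; requires ffmpeg) and append it to a
# JSONL dataset. Record format and file names are arbitrary choices.
import json
from datetime import datetime

import whisper

model = whisper.load_model("base")  # small and fast; larger models are more accurate

def log_voice_note(audio_path: str, log_path: str = "voice_notes.jsonl") -> str:
    """Transcribe one recording and append it to the personal dataset."""
    text = model.transcribe(audio_path)["text"].strip()
    entry = {"timestamp": datetime.now().isoformat(), "source": audio_path, "text": text}
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return text
```

An append-only JSONL file is easy to search, deduplicate, and later feed to whatever assistant or fine-tuning pipeline turns out to want it.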
In conclusion, to thrive in the 100X economy and remain relevant amidst rapidly advancing technology, it is crucial to adopt new AI tools and begin acquiring slow-to-acquire resources now. Identifying and prioritizing these resources, such as mental habits and personal datasets, can give individuals a competitive edge and position them for success in the weird times ahead.
Do you think there are important points missing in preparing for this wild future?
- ^
2026 mainly reflects the fact that we have short timelines. This market could be relevant to our prediction of short-term economic change.
- ^
A further question to explore is how to filter the ever-expanding list of new tools and workflows. I hope LessWrong can stay a place where high-quality productivity information is filtered and curated.
- ^
Another possibility is that assistants will be good at modelling their user from little interaction, so a large initial dataset would be less useful.
Types of data that could be valuable include a notes database, unstructured voice and screen recordings, and measurements à la Quantified Self.
- ^
If Roodman’s model of economic growth holds, prepare for serious gains.
My own guess here is that access to capital will become more important than it is today by an order of magnitude.
In the forager era capital barely mattered because almost all value was created via labor. With no way to reliably accumulate capital, there was little opportunity to exploit it.
In the farmer era, capital became much more important, mainly in the form of useful land, but labor remained of paramount importance for generating value. If anything, capital made labor more valuable and thus demanded more of it.
In the industrial era, capital became more important again, and now we saw the relationship to labor change. Individuals could have their labor multiplied by the application of capital, and in some cases labor could be replaced by capital, with the remaining labor commanding how that capital was used.
With AI we’ll see another shift. AI is very capital-intensive and magnifies labor productivity even more than heavy industry did. It seems plausible that in the not-too-distant future most or all labor will be automated away, and all that will matter for economic production is owning capital or having ideas about how to effectively deploy it, since the rest will be fully automated. During the transition labor will matter, but capital will matter more.
So now, as always, it’s a good idea to have a lot of money, and having enough money to invest in capital improvements that can generate returns will matter even more in the near future with labor further marginalized.
My actionable advice would be to find ways to possess as much money as you can, and to be less willing to trade off money for other things in the short term, since you’ll soon have the opportunity to deploy it for outsized gains.
It’s useful to note that of the three things you list, only one is rivalrous. Mental habits and personal datasets are not, perhaps, equally available to everyone, but the limit is not about division but just … personal limits. There are no zero-sum elements where someone gets better mental habits only by someone else getting less.
Classic capital IS rivalrous and, on large scales, zero-sum. Money is only valuable if it moves from one entity to another, and only because nobody can have enough of it.
To the extent that having money is (even more than today) the best way to get more money, it seems likely that the best advice to prepare for that world is to collect “classic capital” in forms that will retain or increase in value as things change. You can always HIRE people (or AIs) with good habits and personal datasets.
Note: like “first, catch a rabbit”, or “first, create the universe” as the first step in making rabbit stew, this advice may be correct, but that doesn’t make it useful.
I think I focused too much on the “competitive” part, but my main point was that only certain factors would maintain a difference between individuals’ productivity, whether they are zero-sum or not. If future AI assistants require large personal datasets to perform well, only the people with preexisting datasets will perform well for a while, even though anyone could start their own dataset at that point.
I thought a bit about datasets before, and to me it seems like what most needs collecting is detailed personal-preference data. E.g., input-output examples of how you generally prefer information to be filtered, processed, communicated to you, and refined with your inputs; what your success criteria for tasks are; and where in your day flow / thought flow the assistant needs to actively intervene and correct you, especially in the places where you feel you could benefit from cognitive extensions most, based on your bottlenecks. This could initially be too hard to infer from screen logs alone (a sketch of one possible record format is below).
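For concreteness, here is one hypothetical record format for such a preference dataset; every field name and value is illustrative, not a standard.

```python
# One hypothetical record in a personal-preference dataset (JSON Lines).
# All field names and values are illustrative.
import json

example = {
    "context": "morning news triage",
    "input": "20 unread AI-newsletter items",
    "preferred_output": "3-bullet summary, only items relevant to my current project",
    "rejected_output": "full chronological digest of all 20 items",
    "success_criterion": "I act on at least one item the same day",
}

with open("preferences.jsonl", "a") as f:
    f.write(json.dumps(example) + "\n")
```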
Thanks for this post. It’s refreshing to hear about how this technology will impact our lives in the near future without any references to it killing us all.
This doesn’t seem wrong, but it’s extremely thin on “how” and reads like a blog post generated by SEO (which I guess these days means generated by an LLM trained to value what SEO values?).
Like, I know that at some point, one of the GPTs will be useful enough to justify a lawyer spending billable time with it, but this post did not tell me anything about how to get from my current state to the state of being able to analyze whether it’s useful enough, or whether I’m just unskilled, or some other confounder.
The question I was exploring was not how to find the tools that do make their users more productive, as I expect good curation to appear in time with the tools, but whether there are resources that would be necessary to use those tools yet difficult to acquire in the short window after the tools are released.
The post was not optimized for SEO, but it definitely has a ChatGPT style I dislike. It’s one of my first posts, so I’m still exploring how to write good-quality posts. Thank you for the feedback!
What about agentic AGI? You only discuss tools here.
At the individual level, I expect agentic AI to enable even more powerful tools, like ACT acting as a semi-autonomous digital assistant, or AutoGPT acting as a lower-level executor, taking in your goals and doing most of the work.
Once we have powerful agentic AGI, of the kind that can run continuously and disempower humanity, I expect that at that point we’ll be leaving the “world as normal but faster” phase where tools are useful, and what happens next depends on our alignment plan, I guess.
OK, I think we are in agreement then. I think we’ll be leaving the “world as normal but faster” phase sooner than you might expect: for example, even by the time my own productivity gets a 3x boost.
We’re in agreement. I’m not sure what my expectation is for the length of this phase or the final productivity boost, but I was exploring what we would need to do now to prepare for the kind of world where there is a short period in which productivity skyrockets. If we end up in such a world, I would prefer people working on AI alignment to be ready to fully exploit the productivity gains.
The best way for MOST people to “thrive” in the kind of economy you describe would be to dismantle capitalism.
Not downvoting, it’s already negative enough, but it’s a common enough sentiment that I figured I’d explain my negative reaction.
I don’t think there’s a path to that, and I don’t think it’s sufficient even if there were.
The current implementation of capitalism is broken, but there will remain SOME form of personal property, SOME way of varying individual incentive/reward for effective fulfillment of other people’s needs, and SOME mechanism for making decisions about short- vs long-term risk-taking in where one invests time/resources. Whether you call it capitalism or not, freedom of individual choice on topics with cost/risk/benefit tradeoffs implies some kind of market.
There may not be a path, but that doesn’t change the fact that not doing it guarantees misery.
It’s definitely not sufficient. You’d have to replace it with something else. And probably make unrelated changes.
But I was trying to challenge this idea that you were somehow still going to earn your daily bread by selling the product of your labor… presumably to the holders of capital. I mean, the post does mention that you’re best off to be a holder of capital, but that’s not going to be available to most people.
It’s very easy to lose a small pile of capital, and relatively easy to add to a large pile of capital. It always has been, but it’s about to get a lot more so. Capital concentrates. So most people are not going to have enough capital to survive just by owning things, at least not unless the system decrees that everybody owns at least some minimum share no matter what. That’s definitely not capitalism.
So the post is basically about “working for a living”. And that might work through 2026, or 2036, or whatever.
And sure, maybe you can do OK in 2026 or even 2036 by doing what this post suggests. If you do those things, maybe you’ll even feel like you’re moving up in the world. But most actual humans aren’t capable of doing what this post suggests (and many of the rest would be miserable doing it). Some people are going to fall by the wayside. They won’t be using AI assistants; they’ll be doing things that don’t need an AI assistant, but that AI can’t do itself. Which are by no means guaranteed to be anything anybody would want to do.
As time goes on, AI will get smarter and more independent, shrinking the “adapt” niche. And robotics will get better, shrinking the “unautomatable work” niche.
There’s an irreducible cost to employing somebody. You have to feed that person. Some people already can’t produce enough to meet that bar. As the number of things humans can do that AI can’t shrinks drastically, the number of such unemployable humans will rise. Fewer and fewer humans will be able to justify their existence.
Yes, that’s in an absolute sense. People talk about “new jobs made possible by the new technology”. That’s wishful thinking. When machines replaced muscle in the industrial revolution, there was a need for brain. Operating even a simple power tool takes a lot of brain. When machines replace brain, that’s it. Game over. Past performance is not a guarantee of future results.
In the end game (not by 2026), the only value that literally any human will be able to produce above the cost of feeding that person will be things that are valued only for being done by humans… and valued by the people or entities that actually have something else to trade for them. There aren’t likely to be that many. Humans in general will be no more employable than chimpanzees.
… however, unlike chimpanzees, if you keep capitalism in anything remotely like its current form, humans may very well not be permitted the space or resources to take care of themselves on their own terms. You can’t ignore the larger ultra-efficient economy and trade among yourselves, if everything, down to the ground you’re standing on, is owned by something or somebody that can get more out of it some other way.
That’s not capitalism. Not unless it’s ownership of capital, and really, if you want it to look like what the word “capitalism” connotes, it kind of has to be quite a lot of capital. Enough to sustain yourself from what it produces.
Again, eventually you’re gonna be irrelevant to fulfilling other people’s needs. If there’s no preparation for that, it’s going to come as quite a shock.
The only exception might be the needs of people who are just as frozen out as you are. And there’s no guarantee that you will be in a position either to produce what you or they need, or to accumulate capital of your own, because all the prerequisite resources may be owned by somebody else who’s using them more “effectively”.
We’re headed toward a world in which letting any human make a really major decision about resource allocation would mean inefficient use of the resources. Possibly insupportably inefficient.
We’re not there yet. We’re not going to be there in 2026, either. But we’re heading there faster and faster.
If you want an end-state system where a bunch of benevolent-to-the-point-of-enslavement AIs run everything, supporting humans is a (or the) major goal for the AIs, an AI’s “consumption” is measured by how much support it gets to give to humans, and the AIs run some kind of market system to see which of them “owns” more resources to do that, then that’s a capitalist system. But humans aren’t the players in that system. And if you’re truly superintelligent, you can probably do better. Markets are a pretty good information processing system, but they’re not perfect.
In the meantime, the things that let capitalism work among humans are falling apart. Once there’s no way to get into the club by using your labor to build up capital from scratch, pre-existing holders of capital become an absolute oligarchy. And capital’s tendency to concentrate means it’s a shrinking oligarchy. And eventually membership in that oligarchy is decided either by inheritance, or by things you did so long ago that basically nobody remembers them. Or possibly no human at all owns anything… sort of an “Accelerando” scenario.
I think that starts to come into being even before the ultimate end game, but in any case it’s going to happen eventually.
That’s not a tenable system, it’s not an equitable system, and only a very small proportion of people could “thrive” under it. It would collapse if not sustained by insane amounts of force. The longer we keep moving toward such a world, the more extreme the collapse is likely to be.
So, yeah, there may not be a path to fixing it, but that means we’re all boned, not that we’re thriving.
[ probably the wrong place for this debate, so I’ll sign off after this. Feel free to respond/rebut/correct me, and I’ll read and do my best to learn, but won’t respond. ]
I’m not sure I said that. It’s quite possible that many people will not be able to provide more value to the consensus of the rest of the world than they consume, and that will be unstable in unpleasant ways. I do pretty strongly expect that, on the long term, each person (or each dunbar-ish-sized group in many cases) must be seen as positive-value to the resource-allocators, whether they be distributed in markets, concentrated in political forms, or alien AIs. Capitalism is more of a result of property rights and optional trade than a chosen cause of such things (though it’s that too—current winners tend to push harder than “natural” resource ownership/allocation mechanisms do).
Oh, yeah. I fully agree that benevolent dictatorship would be great, and I do give some weight that AI will create dictatorship-conditions, either of the AIs or of humans that manage to corral the AIs at the right point in time. I don’t give a lot of weight to the hope that such a system will be all that benevolent to the non-producing class of humans.
This is probably a major point of disagreement. Whether a given resource is capital or not is based on its USE, not its nature. If individuals can influence the ways that resources are allocated to produce value for others, and be variably rewarded based on the success of that allocation or use, they are involved in capitalism.
You didn’t, but I thought it was pretty much the entire point of the original article.