I have been saving money until now, partly because of the potential impact of job automation and partly because of the rising value of investments once AI really takes off. But at this point it just seems better to start burning cash on personal holidays/consumption instead, and to enjoy the time until the short end (3-5 years). Do you guys think it is too early to say?
Most experts do not believe that we are certainly (>80%) doomed.
It would be an overreaction to give up after the news that politicians and CEOs are behaving like politicians and CEOs.
The crux is timing, not doom. Even in the absence of doom, if superintelligence arrives, savings likely become similarly useless. But in the absence of superintelligence (doom or not), savings remain important.
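A toy way to see that structure, with numbers made up purely for illustration (these are not anyone's actual estimates):

```python
# Toy expected-value sketch of the timing argument above.
# All probabilities and payoffs are invented for illustration only.

scenarios = {
    # name: (probability, marginal value of savings in that world)
    "superintelligence + doom":   (0.2, 0.0),  # no one left to spend savings
    "superintelligence, no doom": (0.3, 0.0),  # abundance makes savings ~useless
    "no superintelligence soon":  (0.5, 1.0),  # savings matter as usual
}

value_of_saving = sum(p * v for p, v in scenarios.values())
print(value_of_saving)  # 0.5 with these made-up numbers

# The value of saving tracks the probability of "no superintelligence soon",
# not the doom/no-doom split -- which is the sense in which timing is the crux.
```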
Yes, it is too early. A big reason is that unless you have very good timing skills, you might bankrupt your own influence over the situation; and most importantly for the purposes of influence, you will probably need large amounts of capital very fast.
Of course, capital is useful for exerting influence now. Although I would suggest that a noticeable impact on events requires capital or power on a scale inaccessible to the vast majority of the population.
But can we end up in a world where the richest 1% or 0.1% survive and the rest die? Unlikely. Even if property rights were respected, such a world would have to descend into a mad hell. Even a world in which only people like Sam Altman and their entourage survive the singularity seems more likely than that. But the most likely outcomes should be either the extinction of everyone or the survival of almost everyone, without a strong correlation with current well-being. Am I mistaken?
I think the answer is yes, and the main way I could see this happening is that we live in an alignment-is-easy world where property rights are approximately respected for the rich (because they can create robotic armies/supply lines to defend themselves), but anyone else's property rights are not respected.
I think the core crux is that I expect alignment to be reasonably easy. I also think that, without massive reform that is unfortunately not that likely, the mechanisms that allow capitalism to help humans by transforming selfish actions into making other people well off will erode rapidly. And I believe that once you are able to make a robot workforce that doesn't require humans, it becomes necessary to make assumptions about their benevolence in order to survive, and we are really bad at making political systems work when we have to assume benevolence/trustworthiness.
To be fair, I do agree with this:
Of course, capital is useful for exerting influence now. Although I would suggest that a noticeable impact on events requires capital or power on a scale inaccessible to the vast majority of the population.
I would expect that the absence of a global catastrophe for ~2 years after the creation of AGI would increase the chances of most people's survival, especially in a scenario where alignment was easy.

After all, there would then be time for political and popular action. We can expect something strange when politicians and their voters finally understand the existential horror of the situation!

I don't know. Attempts to ban all AI? The Butlerian Jihad? Nationalization of AI companies? Revolutions and military coups? Everything seems possible.

If AI respects the right to property, why shouldn't it respect the right to UBI if such a law is passed? The rapid growth of the economy will make it possible to feed many.

In fact, a world in which someone shrugs their shoulders and allows 99% of the population to die seems obviously unsafe for the remaining 1%.
I think the crux is that I don't believe political will/popular action will matter until AI can clearly automate ~all jobs, for both reasonable and unreasonable reasons, and by that point it is far too late to do much of anything by default; the point of no return was way earlier.
For political action to be useful, it needs to happen when there are real signs that AI could, for example, automate AI research, not after the event has already happened.
Over the past three years, as my timelines have shortened and my hopes for alignment or coordination have dwindled, I’ve switched over to consumption. I just make sure to keep a long runway, so that I could pivot if AGI progress is somehow halted or sputters out on its own or something.
I think it depends how much of a sacrifice you are making by saving. If your life is materially very much worse today than it could be, because you're hoping for a payoff 10 or 20+ years hence, I'd say probably save less, but not 0. But money has sharply decreasing marginal utility once your basic needs are met, and I can picture myself blowing all of my savings on a year-long party, and then going "well, actually that wasn't that much fun, my health is much worse, and I have no savings; I regret this decision".

On the other hand, I can picture myself deciding to go on a nice holiday for a few weeks this year rather than in a few years, which yes would impact my savings rate but not by that much (it would be a lot compounded over 30 years at standard rates of return of 5-10% per year, but not a lot in the near term), and 5 years hence going "well, I half expected to be dead by now, and the economy is exploding such that I am now a billionaire on paper, and if I hadn't taken that holiday that cost me a single digit number of thousands of dollars, and had invested it instead, I'd have another $50 million… but I don't regret that decision; I'm a billionaire, and an additional $50 million doesn't make a difference".

Third scenario: the nanobots or whatever are clearly about to kill me within a very short time frame. In my last short span of time before death, what decision will I wish I had made? I'm really not sure, and I think future-me would look back at me today and go "you did the best you knew how with the information you had" regardless of what decision I make. Probably future-me will not be going either "I wish I had been more hedonistic" or "I wish I had been less hedonistic". Probably his wish will be "I wish it hadn't come to this and I had more time to live."

And if I spend a chunk of my time trying to increase the chance that things go well, rather than doing a hedonism, I bet future-me will be pleased with that decision, even if my ability to affect the outcome is very small. Provided I don't make myself miserable to save pennies in the meantime, of course.
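To make the "a lot compounded over 30 years, not a lot in the near term" parenthetical concrete, here is a minimal back-of-the-envelope sketch; the $5,000 holiday cost stands in for "a single digit number of thousands of dollars" and is an assumption, as are the specific rates.

```python
# Back-of-the-envelope compounding check for the holiday-vs-invest trade-off.
# The $5,000 cost and the 5-10% return range are illustrative assumptions.

def future_value(principal: float, annual_rate: float, years: int) -> float:
    """Value of `principal` compounded once per year at `annual_rate`."""
    return principal * (1 + annual_rate) ** years

holiday_cost = 5_000  # a "single digit number of thousands of dollars"

for rate in (0.05, 0.07, 0.10):  # standard long-run return assumptions
    five_year = future_value(holiday_cost, rate, 5)
    thirty_year = future_value(holiday_cost, rate, 30)
    print(f"{rate:.0%}: ~${five_year:,.0f} after 5 years, ~${thirty_year:,.0f} after 30 years")

# Roughly $6-8k after 5 years (not much more than the original $5k), versus
# roughly $22k-$87k after 30 years (a lot, in ordinary-economy terms).
```

None of this applies in the exploding-economy branch, of course, where the counterfactual $50 million is rounding error anyway.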
Any experiences you really really want to have before you die that aren't ruinously expensive, don't wait, I'd say. But: my view is we're entering into a period of rapid change, and I think it's good to enter into that having lots of slack, where liquid assets are one form of slack. They give you options for how to respond, and options are valuable. I can definitely picture a future a year or two out where I go "If only I had $x, I could take this action/make this outcome more likely. Oh wait, I do have $x. It may have seemed weird or crazy to spend $x in this way in 2025, but I can do it, and I'm going to." And then the fact that I was willing to blow $x on the thing makes other people sit up and pay attention, in addition to getting the thing done.
We’re not dead yet. Failure is not certain, even when the quest stands upon the edge of a knife. We can still make plans, and keep on refining and trying to implement them.
And a lot can happen in 3-5 years. There could be a terrible-but-not-catastrophic or catastrophic-but-not-existential disaster bad enough to cut through a lot of problems. Specific world leaders could die or resign or get voted out and replaced with someone who is either actually competent, or else committed to overturning their predecessor's legacy, or something else. We could be lucky and end up with an AGI that's aligned enough to help us avert the worst outcomes. Heck, there could be observers from a billion-year-old alien civilization stealthily watching from the asteroid belt and willing to intervene to prevent extinction events.
Do I think those examples are likely? No. Is the complete set of unlikely paths to good outcomes collectively unlikely enough to stop caring about the long term future? Also no. And who knows? Maybe the horse will sing.
This is indeed how I've been living my life lately. I'm trying to avoid any unacceptable states, like ending up in debt or without the ability to sustain myself if I'm wrong about everything, but aside from that it's all short-term hedonism.
I think this is a rather legitimate question to ask. I often dream about retiring to an island for the last few months of my life, hanging out with friends and reading my books, and then watching the setting sun until my carbon and silicon are repurposed atom by atom.
However, that is just a dream. I suspect the moral of the story often comes at the end: "Don't panic. Don't despair. And don't give up."