Thanks for writing this up. This is something I think a lot of people are struggling with, and will continue to struggle with as AI advances.
I do have worries about AI, mostly that it will be unaligned with human interests and we’ll build systems that squash us like bugs because they don’t care if we live or die. But I have no worries about AI taking away our purpose.
The desire to feel like one has a purpose is a very human characteristic. I’m not sure that any other animals share our motivation to have a motivation. In fact, past humans seemed to have less of this, too, if reports of extant hunter-gatherer tribes are anything to go by. But we feel like we’re not enough if we don’t have a purpose to serve. Like our lives aren’t worth living if we don’t have a reason to be.
Maybe this was a historically adaptive fear. If you were in a small band or living in a pre-industrial society, every additional person came at a real cost. Societies existed up against the Malthusian limit, and there was no capacity to feed more mouths. You either contributed to society, or you got cast out, because everyone was in survival mode, and surviving is what we had to do to get here.
But AI could make it so that literally no one has to work ever again. If we get it right, perhaps none of us will need to serve any purpose to ensure our continued survival. Is that a problem? I don’t think it has to be!
Our minds and cultures are built around the idea that everyone needs to contribute. People internalize this need, and one way it can come out is as feeling like life is not worth living without purpose.
But you do have a purpose, and it’s the same one all living things share: to exist. It is enough to simply be in the world. Everything else is contingent on what it takes to keep existing.
If AI makes it so that no one has to work, that most of us are out of jobs, that we don’t even need to contribute to setting our own direction, that need not be bad. It could go badly, yes, but it could also be freeing to be as we wish, rather than as we must.
I speak from experience. I had a hard time seeing that simply being is enough. I’ve also met a lot of people who had this same difficulty, because it’s what draws them to places like the Zen center where I practice. And everyone is always surprised to discover, sometimes after many years of meditation, that there was never anything that needed to be done to be worthy of this life. If we can eliminate the need to do things just to get to keep living this life, so that no one need lose it to accident or illness or confusion or anything else, then all the better.
Thank you for laying out a perspective that balances real concerns about misaligned AI with the assurance that our sense of purpose needn’t be at risk. It’s a helpful reminder that human value doesn’t revolve solely around how “useful” we are in a purely economic sense.
If advanced AI really can shoulder the kinds of tasks that drain our energy and attention, we might be able to redirect ourselves toward deeper pursuits—whether that’s creativity, reflection, or genuine care for one another. Of course, this depends on how seriously we approach ethical issues and alignment work; none of these benefits emerge automatically.
I also like your point about how Zen practice emphasises that our humanity isn’t defined by constant production. In a future where machines handle much of what we’ve traditionally laboured over, the task of finding genuine meaning will still be ours.