I’m more optimistic about this because a lot of people already don’t care about whether they make the world a better place (apart from sharing joy and sex and friendship with other people, which is a form of service to the world, too), and they seem (mostly) fine with that. I don’t think the meaning crisis and global psychological health will radically worsen just because AI will strip the ability to make a good impact in the world through cognitive achievement from the (relative) minority of people who are still oriented towards this.
I expect a benevolent AGI sovereign would purposefully return a lot of power and responsibility to humans, despite being able to do everything itself, just to give us something to do and an honest feeling that our lives have meaning. I think that actually most people do want to feel as if they are serving their community, and making their local world a better place—in fact this is kind of a fundamental human need—and a world in which that is not possible would be hellish for most people.
It will feel like (and will be) a game in low-stakes situations, like when parents let children cook dinner occasionally, even though they know the kids will burn the pie and forget to season the salad. Superintelligent AI won’t seriously stake its own future on the capability and execution of unaided humans.
So, it could be a sort of game, a competition, “who helped the AI the most”, like in a family, “who cooked the best dish for dinner”.
Just doing small favours for people around you, sharing laughs with them, or even smiling at them and receiving a smile back, is already a kind of service that makes the local world a better place—so realistically, AI won’t completely strip people of that. However, serving the world by applying cognitive effort will become a thing of the past.
I understand why you say this, but I honestly think you are wrong. I think the AGI should be very hands-off, and basically be a “court of last resort” / minarchist state, with the ability (and trusted tendency) to intervene when a truly bad decision is about to be made, but which heavily discourages people from relying on it in any way. I think you’re underestimating how much responsibility humans need in order to be happy.
Firstly, happiness is a combination of many things. A fair bit of that is pleasant sensations: not being in pain, nice food, comfort, pretty surroundings…
It’s an average, not an AND gate, so you can still be happy even if some things aren’t great, as long as the rest are good.
Secondly, a lot of people don’t want responsibility—e.g., prospective parents who aren’t sure they want the responsibility. People often want to escape their responsibilities.
Thirdly, I am not responsible for global steel production. Do I have any reason to prefer some other humans holding that responsibility over no one holding it? No, not really. Who holds responsibility in today’s world? A few rich and powerful people. But mostly responsibility is held by paperwork monstrosities, where each person just follows arcane bureaucratic rules.
You’re totally right about all this! But still many people do want responsibility to some extent—and re: paperwork monstrosities, this is objectively abnormal for the human race; civilization is utterly alien to our evolved context, and imo mostly a bad thing. The only good thing that’s come out of it is technology, and of that, only medical and agricultural technology is really purely good. People in the state of nature have responsibility roughly evenly distributed among the tribe members, though with more among those who are older or considered more expert in some way.
In a recent interview, Sutskever said something very close: “I would much rather have a world where people are free to make their own mistakes [...] and AGI provide more like a base safety net”.
However, all the AI products being made today actively take away people’s freedom to “make mistakes”. If people want to stay competitive in the market, they will soon find it obligatory to use ChatGPT to check their strategies, Copilot to write code, Microsoft Business Chat to make business decisions, etc., because all these tools will soon make fewer mistakes than people.
Same in romance: I’d not be surprised if people soon need to use AI assistance both online and offline (via real-time speech recognition and comms through AirPods or smart glasses à la rizzGPT) to stay competitive in the dating market.
So, if we don’t see these things as part of the future world, why are we introducing them today (which means we already plan to remove these things in the future)? 🤔
Because there is no coherent we. Moloch rules the earth, alongside Mammon. The enslavement of humanity benefits many corporate egregores.