I understand why you say this, but I honestly think you are wrong. I think the AGI should be very hands-off, and basically be a “court of last resort” / minarchist state, with the ability (and trusted tendency) to intervene when a truly bad decision is about to be made, but which heavily discourages people from relying on it in any way. I think you’re underestimating how much responsibility humans need in order to be happy.
Firstly, happiness is a combination of many things, and a fair bit of it is pleasant sensations: not being in pain, nice food, comfort, pretty surroundings…
It’s an average, not an AND gate, so you can still be happy with some things not great, if the rest are good.
Secondly, a lot of people don’t want responsibility — e.g., prospective parents who aren’t sure they want the responsibility. People often want to escape their responsibilities.
Thirdly, I am not responsible for global steel production. Do I have any reason to prefer some other humans holding that responsibility over no one holding it? No, not really. Who holds responsibility in today’s world? A few rich and powerful people. But mostly responsibility is held by paperwork monstrosities, where each person just follows arcane bureaucratic rules.
You’re totally right about all this! But still many people do want responsibility to some extent—and re: paperwork monstrosities, this is objectively abnormal for the human race; civilization is utterly alien to our evolved context, and imo mostly a bad thing. The only good thing that’s come out of it is technology, and of that, only medical and agricultural technology is really purely good. People in the state of nature have responsibility roughly evenly distributed among the tribe members, though with more among those who are older or considered more expert in some way.
In a recent interview, Sutskever said something very close to this: “I would much rather have a world where people are free to make their own mistakes [...] and AGI provide more like a base safety net”.
However, all the AI products being made today actively take away people’s freedom to “make mistakes”. If people want to stay competitive in the market, they will soon find it obligatory to use ChatGPT to check their strategies, Copilot to write code, Microsoft Business Chat to make business decisions, etc., because all these tools will soon make fewer mistakes than people.
Same in romance: I wouldn’t be surprised if people soon needed AI assistance both online and offline (via real-time speech recognition and comms through AirPods or smart glasses à la rizzGPT) to stay competitive on the dating market.
So, if we don’t see these things as part of the future world, why are we introducing them today (which means we already plan to remove these things in the future)? 🤔
Because there is no coherent we. Moloch rules the earth, alongside Mammon. The enslavement of humanity benefits many corporate egregores.