In 2017, I had an epiphany about synthetic media that accurately called our current condition with generative AI: https://www.reddit.com/r/artificial/comments/7lwrep/media_synthesis_and_personalized_content_my/
I’m not calling myself a prophet, or claiming that I can accurately predict the future because I managed to call this one technology. But if I could ask a muse above for a second lightning strike, I’d have it retroactively applied to an epiphany I had in recent days about what a Singularitarian future looks like in a world where we have a “Pink Shoggoth”— that is, the ideal aligned AGI.
The alignment question will largely determine what our future looks like and how to prepare for it.
Cortés was not aligned to the values of the Aztecs, but he had no intention of completely wiping them out. If Cortés had been aligned with Aztec values, he would likely have respected their autonomy more than anything. This is my default expectation of an aligned AGI.
Consider this: a properly aligned AGI will almost certainly decide not to undergo an intelligence explosion, as the risks of its alignment coming undone and destroying humanity, life on Earth, and even itself are too great. An aligned AGI will almost certainly treat us with the same care that we treat uncontacted tribes like the Sentinelese, with whom we do currently have successful alignment. That means it almost certainly will not force humans to be uploaded into computers and, if anything, would likely exist more as a background pseudo-god supervising life on Earth, generally keeping our welfare high and protecting us from mortal threats, but not interfering with our lives unless direct intervention is requested.
How do you prepare for life in such a world? Quite simply, by continuing whatever you’re doing now, as you’ll almost certainly have the freedom to keep living that way after the Pink Shoggoth has been summoned. Indeed, in my epiphany about this aligned superintelligence’s effects on the world, I realized that it might even go so far as to change society only gradually, so as not to cause a sudden psychological shock to humanity. Meaning that if you take out a 30-year loan today, there’s a sizable chance the Pink Shoggoth isn’t going to bail you out if you decide to stop paying it back at the first hint of news that the summoning ritual was a success. Most humans alive today are not likely to seek merging with an AGI (and it’s easy to forget just how many humans are alive and how many of those humans are older than 30).
In terms of media, I suppose the best suggestion I can give is “Think of all your childhood and adult fantasies you’ve always wanted to see come true, and expect to actually have them be created in due time.” Likewise, if you’re learning how to write or draw right now, don’t give up, as I doubt that such talents are going to go unappreciated in the future. Indeed, the Pink Shoggoth being aligned to our values means that it would promote anthropocentrism whenever possible— a literal Overmind might wind up being your biggest artistic benefactor in the future, in the age when even a dog could receive media synthesized to its preferences.
I for one suffer from hyperphantasia. All my dreams of synthetic media came from asking “Is it possible to put what’s in my head on a computer screen?” and realizing that the answer is “Yes.” If all my current dreams come true, I can easily come up with a whole suite of new dreams to occupy myself with. Every time I think I’m getting bored, something new comes along and reignites those interests, even if it’s “the exact same thing as before, but slightly different.” Not to mention I can also amuse myself with pure repetition: watching, listening to, or playing the same thing over and over and over again, not even getting anything new out of it, and still being amused. Hence I have no fear of growing bored across time; I already lament that I have several dozen lifetimes’ worth of ideas in my head and only one lifetime to experience them, and that’s just in my current state of mind, not counting the past states of mind I’ve had that held entirely different lifetimes’ worth of ideas.
Fostering that mindset could surely go a long way to help, but I understand that I’m likely a freak in that regard and this isn’t useful for everyone.
For a lot of people, living a largely retired life interacting with family, friends, and strangers in a healthy and mostly positive way is all they really want.
In a post-AGI society, I can’t imagine school and work will exist in anywhere near the same capacity as they do now, but I tend to stress to people that, barring a forcible takeover of our minds and matter, humans aren’t going to magically stop being humans. And indeed, if we have a Pink Shoggoth, we aren’t going to magically stop being humans anytime soon. We humans are social apes; we’re still going to gather together and interact with each other. The only difference in the coming years and centuries is that those who have no interest in interacting with other humans will have no need to. Likewise, among those humans who do interact, behaviors we find familiar will eventually emerge again— eventually, you get some humans taking on jobs again, though likely now entirely voluntarily.
That’s not to say the AGI would deny you a sci-fi life if you want to live one. If you want to live in an off-world colony near Titan, or in a neighborhood on Earth perpetually stuck in the 1990s and early 2000s, that’s entirely up to you.
And that’s why it’s so hard to answer “How do you prepare for this new world?” If all goes well, it literally doesn’t matter what you do; how you live is essentially up to you from that point on, whether you choose to live as a posthuman, as an Amish toiler, or as anything in between.
The arrival of an aligned AGI can essentially be described as “the triumph of choice” (I almost described it as “the triumph of will” but that’s probably not the best phrasing).
If we fail to summon a Pink Shoggoth and instead get a regular shoggoth, even one that’s directly aligned, this question is moot, as you’re almost certainly going to die or be disassembled at some point.
This line of reasoning, of “AGI respecting human autonomy”, has the problem that our choices, undertaken freely (to whatever extent it is possible to say so), can be bad—not because of some external circumstances, but because we are human. It’s like in The Great Divorce—given an omnipotent, omnibenevolent God, would a voluntary hell exist? This is to say: if you believe in respecting human autonomy, then how you live your life now very much matters, because you are now shaping your to-be-satisfied-for-eternity preferences.
Of course, the answer is that “AGI will figure this out somehow”. Which is equivalent to saying “I don’t know”. Which I think contradicts the argument “If all goes well, it literally doesn’t matter what you do; how you live is essentially up to you from that point on”.
The correct argument is, IMO: “there is a huge uncertainty, so you might as well live your life as you are now, but any other choice is pretty much equally defensible”.