I would like to know what would have happened if, sometime during the Dark Ages, let’s say, benevolent and extremely advanced aliens had landed with the intention to fix everything. I would diligently copy and disseminate the entire Wikipedia-equivalent of their generously divulged scientific and sociological knowledge, plus cultural notes on the aliens, so that I could write a really keenly plausible sci-fi series.
A sci-fi series based on real extra-terrestrials would quite possibly be so alien to us that no one would want to read it.
Not just science fiction and aliens either. Nearly all popular and successful fiction is based around what are effectively modern characters in whatever setting. I remember a paper I read back around the mid-eighties pointing out that Louis L’Amour’s characters were basically just modern Americans with the appropriate historical technology and locations.
I’ve found that Umberto Eco’s novels do the best job I’ve seen at avoiding this.
I’d love to see an essay-length expansion on this theme.
As I wrote, I read it in something in the 1980s. Probably, but I’m not sure, in Olander and Greenberg’s “Robert A. Heinlein” or in Franklin’s “Robert A. Heinlein: America as Science Fiction”.
I might have to mess with them a bit to get an audience, yes.
Of course you can’t fully describe the scenario, or you would already have your answer, but even so, this question seems tantalizingly underspecified. Fix everything, by what standard? Human goals aren’t going to sync up exactly with alien goals (or why even call them aliens?), so what form does the aliens’ benevolence take? Do they try to help the humans in the way that humans would want to be helped, insofar as that problem has a unique answer? Do they give humanity half the stars, just to be nice? Insofar as there isn’t a unique answer to how-humans-would-want-to-be-helped, how can the aliens avoid engaging in what amounts to cultural imperialism—unilaterally choosing what human civilization develops into? So what kind of imperialism do they choose?
How advanced are these aliens? Maybe I’m working off horribly flawed assumptions, but in truth it seems kind of odd for them to have interstellar travel without superintelligence and uploading. (You say you want to write keenly plausible science fiction, so you are going to have to do this kind of analysis.) The alien civilization has to be rich and advanced enough to send out a benevolent rescue ship, and yet not develop superintelligence and send out a colonization wave at near-c to eat the stars and prevent astronomical waste. Maybe the rescue ship itself was sent out at near-c and the colonization wave won’t catch up for a few decades or centuries? Or maybe the rescue ship was sent out, and then the home civilization collapsed or died out, and the ship can’t return or rebuild on its own (not enough fuel or something), so they need some of the Sol system’s resources?
Or maybe there’s something about the aliens’ culture and psychology such that they are capable of developing interstellar travel but not capable of developing superintelligence? I don’t think it should be too surprising if the aliens were congenitally confused, unable to discover certain concepts. (Compare how the hard problem of consciousness just seems impossible; maybe humans happen to be flawed in such a way that we can never understand qualia.) So the aliens send their rescue ship, share their science and culture (insofar as alien culture can be shared), and eighty years later, the humans build an FAI. Then what?
Why not, as long as I’m making things up?
Because they are from another planet.
I do not know enough science to address the rest of your complaints.
OK, I sense cross-purposes here. You’re asking “what would be the most interesting and intelligible form of positive alien contact (in human terms)”, and Zack is asking “what would be the most probable form of positive alien contact”?
(By “positive alien contact”, I mean contact with aliens who have some goal that causes them to care about human values and preferences (think of the Superhappies), as opposed to a Paperclipper that only cares about us as potential resources for or obstacles to making paperclips.)
Keep in mind that what we think of as good sci-fi is generally an example of positing human problems (or allegories for them) in inventive settings, not of describing what might most likely happen in such a setting...
I’m worried that some of my concepts here are a little bit shaky and confused in a way that I can’t articulate, but my provisional answer is: because their planet would have to be virtually a duplicate of Earth to get that kind of match. Suppose that my deepest heart’s desire, my lifework, is for me to write a grand romance novel about an actuary who lives in New York and her unusually tall boyfriend. That’s a necessary condition for my ideal universe: it has to contain me writing this beautiful, beautiful novel.
It doesn’t seem all that implausible that powerful aliens would have a goal of “be nice to all sentient creatures,” in which case they might very well help me with my goal in innumerable ways, perhaps by giving me a better word processor, or providing life extension so I can grow up to have a broader experience base with which to write. But I wouldn’t say that this is the same thing as the alien sharing my goals, because if humans had never evolved, it almost certainly wouldn’t have even occurred to the alien to create, from scratch, a human being who writes a grand romance novel about an actuary who lives in New York and her unusually tall boyfriend. A plausible alien is simply not going to spontaneously invent those concepts and put special value on them. Even if they have rough analogues to “courtship story” or even “person who is rewarded for doing economic risk-management calculations,” I guarantee you they’re not going to invent New York.
Even if the alien and I end up cooperating in real life, when I picture my ideal universe, and when they picture their ideal universe, they’re going to be different visions. The closest thing I can think of would be for the aliens to have evolved a sort of domain-general niceness, and to have a top-level goal for the universe to be filled with all sorts of diverse life with their own analogues of pleasure or goal-achievement or whatever, which me and my beautiful, beautiful novel would qualify as a special case of. Actually, I might agree with that as a good summary description of my top-level goal. The problem is, there are a lot of details that that summary description doesn’t pin down, which we would expect to differ. Even if the alien and I agree that the universe should blossom with diverse life, we would almost certainly have different rankings of which kinds of possible diverse life get included. If our future lightcone only has room for 10^200 observer-moments, and there are 10^4000 possible observer-moments, then some possible observer-moments won’t get to exist. I would want to ensure that me and my beautiful, beautiful novel get included, whereas the alien would have no advance reason to privilege me and my beautiful, beautiful novel over the quintillions of other possible beings with desires that they think of as their analogue of beautiful, beautiful.
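(To make the scale of those illustrative numbers explicit: room for 10^200 observer-moments out of 10^4000 possible ones means at most 10^200 / 10^4000 = 10^-3800 of them can ever be realized, so any particular possible being and its heart’s desire is competing for an astronomically scarce slot.)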
This brings us to the apparent inevitability of something like cultural imperialism. Humans aren’t really optimizers—there doesn’t seem to be one unique human vision for what the universe should look like; there’s going to be room for multiple more-or-less reasonable construals of our volition. That being the case, why shouldn’t even benevolent aliens pick the construal that they like best?
Domain-general niceness works. It’s possible to be nice to and helpful to lots of different kinds of people with lots of different kinds of goals. Think Superhappies except with respect for autonomy.