I don’t see how those novels could have been an inspiration? I read them when I was just awakening (~2005), and even then I noticed the sharp absence of any artificial intelligence. I believe Greg Egan’s idea of the future is still a serious possibility. After all, as with aliens, the only example of something resembling generally intelligent, aware, and goal-oriented agents is us ourselves.
If there was an inspiration, I would suspect other works to be a more likely source.
I haven’t read the book, but it rather looks like he portrays this movement as a conspiracy to live off the money of nonconformists, hidden under a massive amount of writing about rationality and topped by the little cherry on the cake that is AI going FOOM (the rapture of the nerds).
They present a seriously posthuman future, with a populace consisting mostly of human uploads and digital-substrate-native people, as a normal setting. Basically: hardcore computationalist cognitive science, a computer-science-mediated total upheaval of the human condition, and observing how life goes on nevertheless instead of bemoaning the awfulness of losing some wetware substrate. Several short stories offer non-shallow thought about the issues with human uploads and human cognitive modification. It is pretty much the same cultural milieu that the SIAI writings are based on.
The ideas about the singularity and AI come from Vinge, but I have a hard time coming up with other writers before 2000 who take the same unflinching materialistic stance on human cognition that Egan does and aren’t saddled with blatantly obvious breaks from reality. Ken MacLeod’s Fall Revolution series, maybe.
Basically, Egan showed how the place where SIAI wants to go could be habitable.
Interesting; it worked pretty much the opposite way for me. At the time I read those novels, the particular idea of substrate independence seemed natural to me. Only now am I beginning to doubt its usefulness. Not that I doubt substrate independence in general, but uploading might not amplify us; the bloated nature of our brains may be an actual feature. Further, we might need a quantum computer the size of a human head to run a simulated brain. The brain’s chaotic nature might also not permit much tinkering after you’ve reached a certain state space.
By the way, isn’t there another movement that was influenced by science fiction?
JK :-)
So I assume that’s a crack at Objectivism, but I think your insight actually applies to a large number of semi-political movements, especially if you see “interesting science fiction”, “utopian/dystopian literature”, and “political philosophy” as repackagings of basically the same thing.
Part of the political backstory of Plato’s Republic is that it documents utopian political theorizing in the presence of Athenian youths. In reality, there were students of Socrates who were part of the Thirty Tyrants… the group responsible for a political purge in Athens. In Plato’s Apology, there’s a bit about how Socrates didn’t get his hands dirty when ordered to participate in the actual killing, but if you want to read critically between the lines, you can imagine that his being ordered to drink hemlock was payback for inflicting bad philosophy on the eventual evil leaders of Athens.
It’s one of those meta-observations where I’m not sure whether there’s real meat there or not, but the pattern of philosophers inspiring politics, and of significant political leaders operating according to some crazy theory, seems to exist. Maybe the politicians would have grabbed any old philosophy for cover? Or maybe the philosophy actually determines some of the outcome? I have no solid opinion on that right now, preferring so far to work on data accumulation...
Aristotle was Alexander the Great’s tutor. Ayn Rand’s coterie included Alan Greenspan. Nietzsche and the Nazis sort of have to be mentioned even if it triggers Godwin. Marx seems to have had something to do with Stalin. Some philosophers at the University of Chicago might be seen as the intellectual grandparents of Cheney et al.
In trying to find data here, the best example of something that didn’t end up being famously objectionable to someone may be Saint Thomas More’s book “A Fruitful and Pleasant Work of the Best State of a Public Weal, and of the New Isle Called Utopia”, which served as an inspiration for Vasco de Quiroga’s colonial administration of Michoacán.
One of the important themes in all this seems to be that “philosophy is connected to bad politics” with alarming frequency—where the philosophers are not or would not even be fans of the political movements and outcomes which claim to take inspiration from their thoughts.
Having read Zendegi, I get the impression from the portrayal of the character Caplan, with an explicit reference to Overcoming Bias and with the parody of the “benevolent bootstrap project”, that Greg Egan is already unhappy with the actions of those he may have inspired and is trying to distance himself from Singularitarianism in the expectation that things will not work out.
The weird thing is that he says “those people are crazy” and at the same time he says “this neuromorphic AI stuff is morally dangerous and could go horribly wrong”. Which, from conversations with LW people, I mean…
...the warnings Egan seems to be trying to raise with this book are a small part of the reason this issue is attracting smart people to online political organization in an effort to take the problems seriously and deal with the issue in an intellectually and morally responsible fashion. But then Egan implicitly bashes the group of people who are already worrying about the fears he addresses in his book…
...which makes me think he just has something like a very “far mode” view of OB (or at least he had such a view in July of 2009, when he stopped adjusting Zendegi’s content)?
A far-mode view of Overcoming Bias could make us appear homogeneous, selfish, and highly competent. The character “Caplan” is basically a sociopathic millionaire with a cryonics policy, a very high risk tolerance, and absolutely no akrasia to speak of… he’s some sort of “rationalist übermensch” twisted into a corporate bad guy. He’s not someone struggling to write several meaningful pages every day on an important topic before the time for planning runs out.
I suspect that if Greg got on the phone with some of us, or attended a LW meetup to talk with people, he would find that mostly we just tend to agree on a lot of basic issues. But the phone call didn’t happen and now the book is published.
One of the reasons I love science fiction is that it says so much about the time and mindset it was written in. I can read sci-fi from the 1960s and recognize the themes and preoccupations, understand how they grew out of 1950s science fiction, see how what I’m reading fell out of fashion with 1970s stuff, and so on. Some of it is pretentious, some childish, some beautiful, some is outright political ranting, and some is just plain fun. Usually it’s a mixture. I wouldn’t be surprised if Zendegi is interesting in 2012 for how much it reveals about the preoccupations of people in early 2009.
And the fact that science fiction is working on shorter timescales like this is also something I find interesting. Shorter science fiction feedback cycles are (weakly) part of what I would expect if concerns about the singularity were signal rather than noise...
Surely Mill and the like can be seen as having some influence on liberalism? I certainly don’t think our current society is so bad as to be comparable to the Nazis or USSR.
I’m also a little unhappy with your characterisation of both Nietzsche and Alexander. For one, Nietzsche’s link to the Nazis was due more to his proto-Nazi sister and brother-in-law, who edited and published The Will to Power, using his name and extracts from his notes to support their political ideology. I also think Alexander wasn’t so bad for his time. True, imperialism isn’t a good thing, but as I’ve been told, for the short span of his rule Alexander was a fairly good king who allowed his subjects to follow their own customs and treated them fairly regardless of nationality. I may be mistaken, though; there might be a bit too much hagiography in the history of Alexander.
Tangent: I’m not sure if the comment referred to Objectivism or Scientology.
Scientology was my first thought. But Scientology mostly reminds me of another sort of movement altogether.
So, who’s worse: Greenspan or Rand? I’m pretty sure that very few people heading the Federal Reserve would have done better than Greenspan… but Rand wrote a few novels that perfectly reflect her pathological psyche. She was a clear case of NPD and abused amphetamines for decades.
I’ve only read Atlas Shrugged… and that only because I couldn’t put it down. I had to see whether it could get any worse…
Egan’s stance is not materialistic in the least. It can best be described as a “what if” of extreme idealism. It features computers without any substrate, as well as universes operating on pure mathematics. You can hardly find a way of being less materialistic than that.
The idea of singularity and AI originates with Stanislaw Lem. Vinge was following his lead.
Egan’s novels do have plenty of themes relevant to transhumanism, though their underlying philosophical suppositions are somewhat dubious at best, as they negate the notion of material reality.
Yeah, ‘materialism’ isn’t perhaps the best word, since the being-made-of-atoms part is often irrelevant in Egan’s work. The relevant connotation of materialism is being made of the math that the atoms obey, without any rule-excepting magic, and Egan has that in spades, while cognitive science is usually the part of even otherwise hard SF where whatever magical asspull the author needs to move the plot happens.
I guess you’re talking about Golem XIV? I was talking about what early MIRI was inspired by, and they talked a lot about Vinge and pretty much nothing about Lem. And I. J. Good’s 1965 ultraintelligent machine paper predates Golem.
Yudkowsky describes Egan’s work as an important influence in Creating Friendly AI, where he comments that a quote from Diaspora “affected my entire train of thought about the Singularity”.
Though it’s worth noting that this was a “That shouldn’t happen” quote and not a “What a good idea” quote.