I just want to bring up another, perhaps irrelevant, thought-group on the internet: the old sci-fi nerds who are confused and taken aback by how the modern world doesn’t match their old visions of the future. They see California Tech People as all lumped together.
TESCREAL stands for “transhumanism, extropianism, singularitarianism, cosmism, rationalism (in a very specific context), Effective Altruism, and longtermism.” It was identified by Timnit Gebru, former technical co-lead of the Ethical Artificial Intelligence Team at Google and founder of the Distributed Artificial Intelligence Research Institute (DAIR), and Émile Torres, a philosopher specializing in existential threats to humanity. These are separate but overlapping beliefs that are particularly common in the social and academic circles associated with big tech in California.
Regarding AI, the author of the linked blog, Charles Stross, is a skeptic who thinks LLMs are pure hype: an impotent scam which will go nowhere. He is mad at the UK government for ‘wasting time and money on fake-AI’, and sees them as being caught up in a scam.
And the comment section is full of folks who seem to more or less agree.
The funny thing about this to me is that Stross’s book Accelerando was one of the books that got me to consider the hypothesis, “Ok, but what if we did have intelligent autonomous non-human entities in the future that humanity had to compete economically with? How would that go for us?”
The answer is, as Zvi frequently reiterates, that it goes very badly indeed for humanity. And faster than one might expect.
I bring this up because I don’t expect Charles Stross himself to come to this comment section to share his views, and I think that hearing what your critics have to say can be a valuable exercise.
The same thing happened previously, between Eliezer and Greg Egan—Egan distanced himself from MIRI’s views, and wrote a novel in which a parodic mishmash of MIRI, Robin Hanson, and Bryan Caplan appears.
As for Stross’s talk, he makes at least one outright weird claim, which is that Russian Cosmism played a role in the emergence of 1990s American transhumanism (and even less likely, that it contributed to the “California ideology”, a name for 1990s dot-com techno-optimism coined in imitation of Engels and Marx’s “German ideology”). As far as I know, there is no evidence for this at all; Russian Cosmism is a case of parallel evolution, a transhumanist philosophy which emerged in the cultural context of 19th-century Russian Orthodoxy.
But OK, Stross points to the long history of traffic back and forth between science fiction, real-world science and technology, and various belief systems, and his claim is that the “TESCREAL” cluster of philosophies largely falls into the category of belief systems detached from reality but derived from science fiction.
Now, obviously the broad universe of science fiction does contain many motifs detached from reality. Reality also now contains many things which were previously science fiction! Also, obtaining a philosophy from science fiction is a somewhat different thing than obtaining a technology from science fiction.
I would also point out that many elements of science fiction came from the imaginations of actual scientists and engineers. Freeman Dyson and Hans Moravec weren’t science fiction writers; they were brainstorming about what might be possible right here in sober physical reality, but they also ended up supplying SF writers with something to write about.
I think the position of Stross and his commenters is largely politically determined. First of all, there is a bit of a culture war within SF, between progressive and libertarian traditions. Eliezer linked to a long tweet about it here. This is another case of SF reflecting an external reality—different worldviews and different agendas for society.
Second, one observes resistance to the idea of existential risk from AI, among numerous progressives who have nothing to do with SF. Here’s an example that I recently ran across. The author says both EA and e/acc are to be dismissed as ideologies of the rich, and we should all focus on phenomena like online radicalization, surveillance, and the environmental impact of data centers… Something about the idea of risks from superintelligent AI triggers resistance, in a way that risks from nuclear war or climate change do not.
Sometimes I wonder what it would be like to be a scientist in the late 1800s / early 1900s who had figured out the possibility of nuclear bombs, but also (in their universe) had calculated that nuclear bombs would ignite the atmosphere if used.
If there were some groups of people actively trying to develop the bomb, laughing off your warning, while other people laughed at the whole idea, because clearly it’s too preposterous for anyone to build such a device, and even if they did, it certainly wouldn’t be so dangerous.