Thanks for this post, great to have this overview!
I can’t quite tell whether Laurent Alexandre is an accelerationist. I don’t know his work very well, but he seems to acknowledge at least some AI-risk arguments.
This is a quote (auto-translated) from his new book:
“The political dystopia described by Harari, predicting that the world of tomorrow would be divided into “gods and useless people,” could unfortunately become a social reality.
Regulating a force as monumental as ChatGPT and its successors would require international cooperation. However, the world is at war. Each geopolitical bloc will use the new AIs to manipulate the adversary and develop destructive or manipulative cyber weapons.”
I don’t know Alexandre’s ideas very well, but here’s my understanding: you know how people who dislike rationalists accuse them of using a veneer of rationality to hide right-wing libertarian beliefs? Well, that’s exactly what Alexandre does, quite openly, complete with some very embarrassing opinions on IQ differences between different parts of the world, which cement his reputation as a rather unsavoury character. (The potential reputational harm of having a caricature of a rationalist as a prominent political actor is left as an exercise to the reader...)
Wikipedia tells me that he likes Bostrom, which probably makes him genuinely more aware of AI-related issues than the vast majority of French politicians. On the other hand, he doesn’t expect AGI before 2100, so until then he’s clearly focused on making sure we work with AI as much as possible: learning to use those superintelligence thingies before they’re strong enough to take our jobs and destroy our democracies, etc. And he’s very insistent that this is an important thing to be doing. If you have shorter timelines than he does (and, like, you do!), then he’s definitely something of an accelerationist.
Well, I did the thing where I actually went and found this guy’s main book (from 2017, so not his latest) on archive.org and read it. The style is odd, with a lot of “she says this, Google says AGI will be fine, some other guy says it won’t”, and I’m not 100% confident what Alexandre himself believes as far as the details are concerned.
But it seems really obvious that his view is at least something like “AI will be super-duper powerful, the idea that perhaps we might not build it does not cross my mind, so we will have AGI eventually; then we’d better have it before the other guys, and make ourselves really smart through eugenics so we’re not left too far behind when the AI comes”. “Enter the Matrix to avoid being swallowed by it”, as he puts it (this is a quote).
Judging by his tone, he seems simply not to consider that we could deliberately avoid building AGI, and to be unaware of most of the finer details of discussions about AI and safety. (He also says that telling AI to obey us will result in the AI seeing us as colonizers and revolting against us, so we should pre-emptively avoid such “anti-silicon racism”. Which is an oversimplification of, like, so many different things.) But some sentences are more like “humanity will have to determine the maximum speed of AI deployment [and it’ll be super hard/impossible because people will want the benefits of more AI]”. So at least he’s aware of the problem. He doesn’t seem to have anything to say beyond that on AI safety issues, however.
Oh, and he quotes (and possibly endorses?) the idea that “duh, AI can’t be smarter than us, we have multiple intelligences, Gardner said so”.
Overall, it’s much clearer to me why Lucie calls him an accelerationist, and it seems like a good characterization.
Ah, interesting. His Guerre des intelligences does seem more obviously accelerationist, but his latest book gives slightly different vibes, so perhaps his views are changing.
But my sense is that he’s actually fairly typical of the polémiste tradition in French intellectual culture, where it’s more about arguing with flair and elegance than about developing consistent arguments. So it might be difficult to find a consistent ideology behind his combination of accelerationism, somewhat pessimistic transhumanism, and moderate AI fear.
Yes, he’s definitely a polemicist, not a researcher or an expert. By training he’s a urologist with an MBA or two, and most of what he writes sounds very simplistic.