Well, I did the thing where I actually went and found this guy’s main book (2017, so not his latest) on archive.org and read it. The style is weird, with a lot of “she says this, Google says AGI will be fine, some other guy says it won’t”, and I’m not 100% confident what Alexandre himself believes as far as the details are concerned.

But it seems really obvious that his view is at least something like “AI will be super-duper powerful, the idea that perhaps we might not build it does not cross my mind, so we will have AGI eventually, then we’d better have it before the other guys, and make ourselves really smart through eugenics so we’re not left too far behind when the AI comes”. “Enter the Matrix to avoid being swallowed by it”, as he puts it (this is a quote).

Judging by his tone, he seems simply not to consider that perhaps we could deliberately avoid building AGI, and to be unaware of most of the finer details of discussions about AI and safety. (He also says that telling AI to obey us will result in the AI seeing us as colonizers and revolting against us, and so we should pre-emptively avoid such “anti-silicon racism”, which is an oversimplification of, like, so many different things.) But some sentences are more like “humanity will have to determine the maximum speed of AI deployment [and it’ll be super hard/impossible because people will want to get the benefits of more AI]”. So, at least he’s aware of the problem. He doesn’t seem to have anything to say beyond that on AI safety issues, however.

Oh, and he quotes (and possibly endorses?) the idea that “duh, AI can’t be smarter than us, we have multiple intelligences, Gardner said so”.
Overall, it’s much clearer to me why Lucie calls him an accelerationist, and it seems like a good characterization.
Ah, interesting. His Guerre des intelligences does seem more obviously accelerationist, but his latest book gives slightly different vibes, so perhaps his views are changing.
But my sense is that he actually seems kind of typical of the polémiste tradition in French intellectual culture, where it’s more about arguing with flair and elegance than developing consistent arguments. So it might be difficult to find a consistent ideology behind his combination of accelerationism, a somewhat pessimistic transhumanism, and moderate AI fear.
Yes, he’s definitely a polemicist, and not a researcher or an expert. By training, he’s a urologist with an MBA or two, and most of what he writes definitely sounds very oversimplified/simplistic.