Reflecting on a transhumanist rebuttal to AI existential risk, and on its critique of our debate methodologies and misuse of statistics

This morning I read an article that pushes back on many of our arguments about AI existential risk, and I'm curious what others think of it: https://darkempressofthevoid.substack.com/p/fck-decels-accelerate. I've outlined my thoughts below. They include some criticism of the article (though I'm unsure whether I'm simply being too cynical), as well as a point from the article about our argument methodologies and tactics on which I agree with the author, along with an example I found today that illustrates the author's point.

From my perspective, the notion that AI might one day surpass human intelligence and spiral beyond our control is not just a speculative concern; it’s a critical issue that demands serious attention. While some may dismiss these worries as alarmist or reactionary, I see them as grounded in a rational assessment of the potential risks AI poses to humanity. The transhumanist vision of a harmonious co-evolution between humans and AI is certainly appealing in theory, but it strikes me as overly optimistic and potentially naive.

The idea that we can simply guide AI’s development from within, ensuring it remains aligned with human values, assumes a level of control that history suggests we may not possess. Technological ecosystems are complex, and once certain thresholds are crossed, they can develop dynamics that are difficult, if not impossible, to steer. The fear that AI might spiral beyond our control is not rooted in a misunderstanding of technology, but in a clear-eyed recognition of the unpredictability inherent in rapidly advancing systems. AI safety researchers advocate for cautious, well-reasoned approaches to managing these risks precisely because we understand that the stakes could not be higher.

While I appreciate the transhumanist enthusiasm for AI as a tool to transcend our biological limitations, I can’t help but see this vision as dangerously complacent. It assumes that technological progress is inherently beneficial and that the right alignment will naturally emerge from the integration of AI with our minds and bodies. But history is replete with examples of well-intentioned innovations that led to unforeseen consequences. To suggest that we can fully control AI’s trajectory seems to underestimate the complexities involved and overestimate our ability to foresee and manage every potential outcome.

Effective Altruism isn’t about retreating from progress; it’s about ensuring that progress leads to the greatest good for the greatest number, without unleashing catastrophic risks in the process. We advocate for rigorous research, robust safety measures, and thoughtful governance precisely because we believe in the potential of AI—but also because we recognize the profound dangers it poses if left unchecked. This isn’t about stifling innovation; it’s about guiding it responsibly to ensure a future where AI truly benefits humanity, rather than leading us into a perilous unknown.

In the end, the optimism of the transhumanist perspective must be tempered with the realism of Effective Altruism. The ascent of AI holds incredible promise, but it also carries significant risks that cannot be dismissed or downplayed. By focusing on these risks, we’re not rejecting the potential of AI—we’re safeguarding it, ensuring that it serves humanity in the long term rather than becoming a force that spirals beyond our control.

As a Rationalist and Effective Altruist, I believe in the power of reason and evidence to guide us toward the most effective ways to do good. However, I have to acknowledge, in agreement with the article, that there are significant flaws in how some within our community approach these goals. The article's criticism is not without merit, particularly when it comes to the misuse of probability theory and the biases that can arise from our methodologies.

Consider, for example, Kat Woods's polling of effective accelerationists about suspected anti-human tendencies within certain factions (https://x.com/kat__woods/status/1806515723306742081). The poll received 253 responses, of which only roughly 33 are from effective accelerationists, far too few to carry much statistical power, and because identification is self-reported, we cannot be certain the responses are truly from effective accelerationists.

Furthermore, of those 253 responses and roughly 33 supposed e/acc responses, only about nine indicate they would be OK if AIs replaced humans. Nine responses is far too small a sample to support a statistically meaningful estimate, yet Kat concludes from this that approximately 1 in 3 effective accelerationists think it would be OK if all humans died and AIs became the new dominant species in her post here (https://x.com/kat__woods/status/1825042764666667085). The quick sketch below illustrates just how wide the uncertainty around that headline figure is.
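To make the uncertainty concrete, here is a minimal Python sketch of a 95% Wilson score confidence interval for the proportion in question. The counts (9 of roughly 33 self-identified e/acc responses) are the approximate figures described above, not exact tallies from the poll, so treat this purely as an illustration of how wide the interval is at this sample size, before even considering self-selection bias.

```python
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a binomial proportion (default ~95%)."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half_width = (z / denom) * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - half_width, centre + half_width

# Approximate counts from the poll discussion above (assumed, not exact):
# ~9 of ~33 self-identified e/acc respondents said AI replacing humans would be OK.
low, high = wilson_interval(9, 33)
print(f"Point estimate: {9/33:.0%}, 95% CI: {low:.0%} to {high:.0%}")
# Prints roughly: Point estimate: 27%, 95% CI: 15% to 44%
```

Even taken at face value, an interval that spans roughly 15% to 44% is consistent with anything from a small minority to nearly half, which is a much weaker claim than a confident "1 in 3".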

These poor statistical and methodological practices undermine our overall arguments and portray us as either under-qualified or acting in bad faith. Arguments and methods like these are precisely what the article critiques us for, yet we continue to undermine our own efforts with them. It doesn't look good, and it reflects poorly not only on our efforts but on our community as a whole.

While the intentions behind such research are undoubtedly good, the execution suffers from a clear sampling bias. By drawing conclusions from a narrow and ideologically aligned sample, we risk creating a distorted picture that doesn’t accurately reflect the broader population or even the true diversity within our own community. This is a critical error, and it does undermine the integrity of our conclusions.

This issue is symptomatic of a larger problem: our community has a tendency to operate in a theoretical vacuum. We often focus so intensely on abstract utility-maximization that we lose sight of the real-world complexities that make these strategies difficult, if not impossible, to implement effectively. By neglecting proper methodological practices, we create models that might work in theory but fail to hold up under empirical scrutiny.

There’s a tendency to project our own subjective preferences onto our definitions of utility, assuming they are universally applicable. This not only weakens our arguments but also introduces a significant bias. When we present these preferences as if they represent the entire population, we strip our utility calculations of their statistical power, leading to conclusions that are both methodologically flawed and ideologically biased.

As a member of this community, I recognize that we must do better. We need to ground our arguments in robust empirical frameworks, not just theoretical ones. Our reliance on abstract models and narrow samples can lead to recommendations that are out of touch with the complexities of the real world. If we fail to address these issues, we risk undermining the very goals we seek to achieve.

In supporting this criticism, I believe it’s crucial for us to confront these methodological flaws head-on. The credibility of our movement depends on it. By refining our approaches, embracing empirical rigor, and being honest about the limitations of our models, we can ensure that our work genuinely contributes to the greater good, rather than being dismissed as ideologically driven or out of touch.
