Longtermism/acc
Meet inside The Shops at Waterloo Town Square—we will congregate in the indoor seating area next to the Your Independent Grocer with the trees sticking out in the middle of the benches (pic) at 7:00 pm for 15 minutes, and then head over to my nearby apartment’s amenity room. If you’ve been around a few times, feel free to meet up at the front door of the apartment, which is near Allen Station, instead.
Topic
As a group broadly aligned with effective altruism and longtermist thinking, we’ve spent some time in previous meetups contemplating how our actions today might shape the long-term future of humanity and other sentient beings.
This week, we’re going to lean into our inner edgelords and use Landian accelerationist ideas to kind of kick the tires of this longtermism thing at a higher level of abstraction. I don’t intend this meetup to convert anyone to accelerationism or deboonk longtermism, but hey, whatever happens, happens B)
I remember when I first encountered longtermist ideas, and how unsettling and absurd they seemed at the time. It took me literal years to accept their logic and come around to them. I feel an echo of that reading this new article on accelerationism. It’s fine though, this probably won’t awaken anything in me aha.
Readings
Longtermism.com’s Introduction to Longtermism (Fin Moorhouse, 2021)
(wayback mirror for those too scared to visit unsecured websites (legit))
A Brief History of Accelerationism (Matt Southey, 2024)
Potential Discussion Questions
How much do you agree with the Landian argument that the correlation between techno-economic development and human well-being is temporary and misleading (and that the system will “shed humanity like a snakeskin when it is no longer needed”)?
How might this view of humans as “temporary workers in [capitalism’s] satanic mills” alter ideal approaches to existential risk mitigation and long-term planning?
Longtermism suggests that we have a responsibility to shape the long-term future, while accelerationism implies that technological progress has its own inevitable trajectory (a “will-to-think” that is orthogonal to human values at best) and that EA/longtermist efforts are basically just elaborate LARPing while the techno-capital machine does its thing. Which side do you come down more on?
If you could have a beer with Nick Land’s capitalism-slash-ai-with-a-will-to-think, what would you ask it? Wrong answers only.