Philosophy graduate interested in metaphysics, meta-ethics, AI safety, and a whole bunch of other things. Meta-ethical and moral theories of choice: neo-Aristotelian naturalist realism + virtue ethics.
Unvarnished critical (but constructive) feedback is welcome.
[Out-of-date-but-still-sorta-representative-of-my-thoughts hot takes below]
Thinks longtermism rests on a false premise – some sort of total impartiality.
Thinks we should spend a lot more resources trying to delay HLMI – make AGI development uncool. Questions what we really need AGI for anyway. Accepts the epithet “luddite” so long as this is understood to describe someone who:
suspects that, on net, technological progress yields diminishing returns in human flourishing.
OR believes workers have a right to organize to defend their interests (you know – what the original Luddites were doing). To fight for higher working standards is to be on the front lines of the fight against Moloch (see e.g. Fleming’s vanishing economy dilemma and how decreased working hours offer a simple solution).
OR suspects that, with regard to AI, the Luddite fallacy may not be a fallacy: AI really could lead to widespread, permanent technological unemployment, and that might not be a good thing.
OR, considering the common-sensey thought that societies have a maximum rate of adaptation, suspects that excessive rates of technological change can lead to harms independent of how the technology is used. (This thought is more speculative/less researched – I’d love to hear evidence for or against.)
Sorry, my first reply to your comment wasn’t very on point. Yes, you’re getting at one of the central claims of my post.
First, I wouldn’t say “mostly.” I think it interferes when taken to excess. Regarding your skepticism: we already know that calculation (a maximizer’s mindset) in other contexts interferes with affective attachment to, and positive evaluations of, the choices made by said calculation (see the references to the psych lit). Why shouldn’t we expect the same thing to occur in moral situations, with the relevant “moral” affects? (In fact, depending on what you count as “moral,” the research already provides evidence of this.)
If your skepticism is about the sheer possibility of calculation interfering with empathy, fellow-feeling, etc., then any anecdotal evidence should do. See e.g. Mill’s autobiography. But also: have you really never been in a situation where you were conflicted between doing two different things with two different people/groups, and too much back and forth left you feeling numb to both options in the end, just shrugging and saying “whatever, I don’t care anymore, either one”? That would be an example of calculation interfering with fellow-feeling.
Some amount of this is normal and unavoidable. But one can make it worse. Whether the LW/EA community does so or not is the question in need of data – we can agree on that! See my comment below for more details.