EA is an intensional movement.
http://effective-altruism.com/ea/j7/effective_altruism_as_an_intensional_movement/
I concur with many other people that when you start off from a wide sample of aggregative consequentialist values and try to do the most good, you bump into AI pretty soon. As I told Stuart Russell a while ago, to explain why a philosopher-anthropologist was auditing his course:
My PhD will likely be a book on altruism, and any respectable altruist these days is worried about AI at least 30% of his waking life.
That’s how I see it, anyway. Most of the arguments for it are in “Superintelligence”; if you disagree with that book, then you probably disagree with me as well.
Not really. My understanding of AI is far from grandiose; I know less about it than about my own fields (philosophy and biological anthropology). I’ve merely read all of FHI, most of MIRI, half of AIMA, Paul’s blog, maybe four popular and two technical books on related issues, and at most 60 papers on AGI per se; I don’t code, and I have only a coarse-grained understanding of the field. But in the limited time I have had to look into it, I saw no convincing evidence for a cap on the level of sophistication that a system’s cognitive abilities can achieve. I have also not seen very robust evidence supporting the hypothesis of a fast takeoff.
The fact that we have not fully conceptually disentangled the dimensions of which intelligence is composed is mildly embarrassing, though, and it may be that AGI turns out to be a deus ex machina: more in line with Minsky or Goertzel than with MIRI or LessWrong, General Intelligence may prove to be a plethora of abilities with no single common denominator, often superimposed in a robust way.
But for now, nobody who is publishing seems to know for sure.