That’s right. I put him in the basket of “the field’s most visible and effective activists”. He’s done more than, literally, 99.9999999% of the human population. That’s why it’s so frustrating that his activism has mostly been done in pursuit of the instrumental goal of recruiting geniuses for MIRI to do higher-quality maths research, not trying to convince the broader public. The sequences were fantastic, probably way better than I could have done if I were trying to drum up public support directly.
But they were never a scalable solution to that problem. People who read the sequences don’t have a bite-sized way of transferring that information to others. They can go up to their friends and say “read the sequences”, but they’re left with no more compact, lossy way to transmit the pertinent details about AGI risk. They’re simply not an attempt to be memetic: something like 1700 pages of content, where the reproducible core idea is probably ten pages. On top of that, he deliberately filters out people with incorrect beliefs about religion, politics, etc., who might otherwise agree with him and be willing to support the narrow cause he champions.
As far as I can tell this was all by design. Eliezer did not, and perhaps still does not, think it’s a worthwhile strategy to get general public & academic support for AGI risk management. He will do a presentation or two at a prestigious university every so often, but he doesn’t spend most of his time reaching people who aren’t Fields Medalists. When he does reach one of those, he sends them to work in his research house instead of trying to get them to do something as effective as writing the sequences was. I shudder to imagine the future we might have had if 10 full-Eliezers and 50 semi-Eliezers had been working on that problem full-time for the last fifteen years.
Note: This is a Dumbledore Critique. It’s not Eliezer’s God-given duty to save the world, and 99.99% of the planet’s intelligentsia gets a much stronger ding for doing nothing at all, or worse.
But they were never a scalable solution to that problem. People who read the sequences don’t have a bite-sized way of transferring that information to others. They can go up to their friends and say “read the sequences”, but they’re left with no more compact, lossy way to transmit the pertinent details about AGI risk.
I don’t think anything like that exists. It’s been tried in hopeless floundering attempts, by people too ignorant to know how hopeless it is; nobody competent has tried it because anybody competent sees that they don’t know how to do that.
I’m not particularly good at persuading people of things. So, consider the fact that I convinced my father, a nontechnical, religious 60-year-old man, that AGI was a serious problem within a single 45-60 minute conversation. To the point that he was actually, unironically concerned that I wasn’t doing more to help. I get this reaction or something equivalent regularly, from people of very different backgrounds, intelligence levels, ethnicities, and political tribes. Not all of them go off and devote their lives to alignment like you did, but they at least buy into an intellectual position and now have a stake in the ground. With pleasant reminders, they often try to talk to their friends.
I didn’t have to get him to change the gears in his mind that led him to Christianity in order to get him to agree with me about AI, just like environmentalists don’t have to turn Democrats into expert scientists to get them to understand global warming. For every person for whom the “convergent instrumental goals” argument has to be formalized, there are people who just get it, people who defer their understanding to others, and people who can use some completely different model that drags them to the same conclusions. There’s a difference between “the arguments and details surrounding AGI risk sufficient to mobilize” and “the fully elaborated causal chain”.
Obviously “go talk to everyone until they agree” isn’t a scalable solution, and I don’t have a definitive one, or else I’d go do it. Perhaps Arbital was partly an attempt to accelerate this process? But you can see why it seems highly counterintuitive to me that it would be literally impossible to bring people onto the AGI risk train without first giving them a textbook of epistemology, to the point that I’d question it even coming from someone who seems right about almost everything.
(It’s possible you’re talking about “understanding the problem in enough detail to solve it technically” while I’m talking about “doing whatever reasoning they need to arrive at the correct conclusion, and maybe help us talk to DeepMind/FAIR researchers who would be better equipped to solve it technically if they had more peer pressure”, in which case that’s that.)
I shudder to imagine the future we might have had if 10 full-Eliezers and 50 semi-Eliezers had been working on that problem full-time for the last fifteen years.
That sounds obviously amazing. Are you under the impression that recruitment succeeded so enormously that there are 10 people who can produce intellectual content as relevant and compelling as the original sequences, but that they’ve been working at MIRI (or something) instead? Who are you thinking of?
I don’t think we got even a single Eliezer-substitute, even though that was one of the key goals of writing the sequences.