It seems to be a pattern across most of society that young people are generally optimistic about the degree to which large institutions/society can be steered, and older people who’ve tried to do that steering are mostly much less optimistic about it. Kids come out of high school/college with grand dreams of a great social movement which will spur sweeping legislative change on X (climate change, animal rights, poverty, whatever). Unless they happen to pick whichever X is actually the next hot thing (gay rights/feminism/anti-racism in the past 15 years), those dreams eventually get scaled back to something much smaller, and also get largely replaced by cynicism about being able to do anything at all.
Remember all of those nonprofits the older generation dedicated to AI-safety-related activism, places where people like Eliezer spent their days trying to convince people that those concerns are correct instead of doing math? All of those hundreds of millions of dollars of funding that went to guys like Rob Miles and not research houses? No? I really want to remember, but I can’t.
Seriously, is this a joke? This comment feels like it was written about a completely different timeline. The situation on the ground for the last ten years has been one where the field’s most visible and effective activists have full-time jobs doing math and ML research on the alignment problem or existential risk in general, or even hold completely unrelated research positions at random universities. We have poured practically 90% of all of our money and labor into MIRI and MIRI clones instead of raising the alarm. When people here do propose raising the alarm, the reaction they get is uniformly “but the something something contra-agentic process” or “activism? are you some kind of terrorist?!”
Even now, after speaking to maybe a dozen people referred to me after my pessimism post, I have not found one person who does activism work full time. I know a lot of people who do academic research on what activists might do if they existed, but as far as I can tell no one is actually doing the hard work of optimizing their leaflets. The closest I’ve found are Vael Gates and Rob Miles, people who instead have jobs doing other stuff, because despite all of the endless bitching about how there are no serious plans, no one has ever decided either to pay these guys for the work they do in between their regular jobs, or to organize it.
A hundred people individually giving presentations to their university or nonprofit heads and then seething when they’re not taken seriously is not a serious attempt, and you’ll forgive me for not just rolling over and dying.
Update ~20 minutes after posting: Took a closer look; it appears Rob Miles might be getting enough from his Patreon to survive, but it’s unclear. It’s weird to me that he doesn’t produce more content if he’s doing this full time.
places where people like Eliezer spent their days trying to convince people their concerns are correct instead of doing math?
What? Eliezer took 2 years to write the sequences, during which he did very little direct alignment work. And in the years before that, SingInst was mostly an advocacy org, running the Singularity Summits.
That’s right. I put him in the basket of “the field’s most visible and effective activists”. He’s done more than, literally, 99.9999999% of the human population. That’s why it’s so frustrating that his activism has mostly been done in pursuit of the instrumental goal of recruiting geniuses for MIRI to do higher-quality maths research, not trying to convince the broader public. The sequences were fantastic, probably way better than anything I could have done if I were trying to drum up public support directly.
But they were never a scalable solution to that problem. People who read the sequences don’t have a bite-sized way of transferring that information to others. They can go up to their friends and say “read the sequences”, but they’re left with no more compact, lossy way to transmit the pertinent details about AGI risk. It’s simply not an attempt to be memetic; something like 1700 pages of content, where the reproducible core idea is probably ten pages. On top of that, he deliberately filters out people with incorrect beliefs about religion, politics, etc., who might otherwise agree with him and be willing to support the narrow cause he champions.
As far as I can tell this was all by design. Eliezer did not, and perhaps still does not, think it’s a worthwhile strategy to get general public & academic support for AGI risk management. He will do a presentation or two at a prestigious university every so often, but he doesn’t spend most of his time reaching people who aren’t Fields Medalists. When he does reach one of those, he sends them to work in his research house instead of trying to get them to do something as effective as writing the sequences was. I shudder to imagine the future we might have had if 10 full-Eliezers and 50 semi-Eliezers had been working on that problem full time for the last fifteen years.
Note: This is a Dumbledore Critique. It’s not Eliezer’s God-given duty to save the world, and 99.99% of the planet’s intelligentsia gets a much stronger ding for doing nothing at all, or worse.
But they were never a scalable solution to that problem. People who read the sequences don’t have a bite-sized way of transferring that information to others. They can go up to their friends and say “read the sequences”, but they’re left with no more compact, lossy way to transmit the pertinent details about AGI risk.
I don’t think anything like that exists. It’s been tried in hopeless floundering attempts, by people too ignorant to know how hopeless it is; nobody competent has tried it because anybody competent sees that they don’t know how to do that.
I’m not particularly good at persuading people of things. So, consider the fact that, in a 45-60 minute conversation, I convinced my father, a nontechnical, religious 60-year-old man, that AGI was a serious problem. To the point that he was actually, unironically concerned that I wasn’t doing more to help. I get this reaction or something equivalent regularly, from people of very different backgrounds, intelligence levels, ethnicities, and political tribes. Not all of them go off and devote their lives to alignment like you did, but they at least buy into an intellectual position and now have a stake in the ground. With pleasant reminders, they often try to talk to their friends.
I didn’t have to get him to change the gears in his mind that led him to Christianity in order to get him to agree with me about AI, just like environmentalists don’t have to turn Democrats into expert scientists to get them to understand global warming. For every person for whom the “convergent instrumental goals” thing has to be formalized, there is someone else who just gets it, defers their understanding to others, or uses some completely different model that drags them to the same conclusions. There’s a difference between “the arguments and details surrounding AGI risk sufficient to mobilize” and “the fully elaborated causal chain”.
Obviously “go talk to everyone until they agree” isn’t a scalable solution, and I don’t have a definitive one or else I’d go do it. Perhaps Arbital was partly an attempt to accelerate this process? But you can see why it seems highly counterintuitive to me that it would be literally impossible to bring people onto the AGI risk train without giving them a textbook of epistemology first, to the point that I’d question it even coming from someone who seems right about almost everything else.
(It’s possible you’re talking about “understanding the problem in enough detail to solve it technically” and I’m talking about “doing whatever reasoning they need in order to arrive at the correct conclusion, and maybe helping us talk to DeepMind/FAIR researchers who would be better equipped to solve it technically if they had more peer pressure”, in which case that’s that.)
I shudder to imagine the future we might have had if 10 full-Eliezers and 50 semi-Eliezers had been working on that problem full time for the last fifteen years.
That sounds obviously amazing. Are you under the impression that recruitment succeeded so enormously that there are 10 people who can produce intellectual content as relevant and compelling as the original sequences, but that they’ve been working at MIRI (or something) instead? Who are you thinking of?
I don’t think we got even a single Eliezer-substitute, even though that was one of the key goals of writing the sequences.
Rob Miles is funded by the Long Term Future Fund at a roughly full-time salary: https://forum.effectivealtruism.org/posts/dgy6m8TGhv4FCn4rx/long-term-future-fund-september-2020-grants#Robert_Miles___60_000_
That’s genuinely good news to me. However, he’s only made two videos in the past year? I’m not being accusatory, just confused.
He has also been helping a bunch of other people with video content creation. For example: https://www.youtube.com/c/RationalAnimations
Gotcha. That’s good to hear.