A fire alarm approach won’t work because you would have people like Elon Musk and Mark Zuckerberg saying that we should be developing AI faster than we currently are. What I suggest should happen instead is that the EA community try to convince a subset of people that AI risk is 80%+ of what we should care about, that if you donate to charity most of it should go to an AI risk organization, and that if you have the capacity to directly contribute to reducing AI risk, that is what you as a moral person should devote your life to.
I don’t think donating to other organizations is meaningful at this point unless those organizations have a way to spend a large amount of capital.
Both Musk and Zuckerberg are convinceable; they’re not insane. You just need to find the experts they’re anchoring on. Musk in particular definitely already believes the thesis.
Additional money would help, as evidenced by my son’s job search. My 17-year-old son is set to graduate college at age 18 from the University of Massachusetts Amherst (where we live), majoring in computer science with a concentration in statistics and machine learning. He is looking for a summer internship. He would love to work in AI safety (and through me has known about and been interested in the field since a very young age), and while he might end up getting a job in the area, he hasn’t yet. In a world where AI safety is well funded, every AI safety organization would be trying to hire him. In case any AI safety organizations are reading this, you can infer his intelligence from his having gotten 5s on the AP Calculus BC and AP Computer Science A exams in 7th grade. I have a PhD in economics from the University of Chicago and a JD from Stanford, and my son is significantly more intelligent than I am.
Tell him to submit an application here, if he hasn’t already. These guys are competent and new.
I’ve heard the story told that Beth Barnes applied to intern at CHAI, but that they told her they didn’t have an internship program. She offered to create one and they actually accepted her offer.
I’m setting up AI Safety Australia and New Zealand to do AI safety movement-building (not technical research). We don’t properly exist yet (I’m still only on a planning grant), we don’t have a website, and I don’t have funding for an internship program, but if someone were crazy enough to apply anyway, I’d be happy to hear from them. They’d have to apply for funding (with guidance from me) so that I could pay them.
I’m sure he can find better opportunities, but I thought I’d throw this out there anyway, as there may be someone who is agenty but can’t access the more prestigious internships.
“In a world where AI safety is well funded, every AI safety organization would be trying to hire him.”
Funding is not literally the only constraint; organizations can also have limited staff time to spread across hiring, onboarding, mentoring, and hopefully also doing the work the organization exists to do! Scaling up very quickly, or moderately far, also has a tendency to destroy an organization’s culture and to induce communication problems at best or moral mazes at worst.
Unfortunately, “just throw money at smart people to work independently” also requires a bunch of vetting, or the field collapses as an ocean of sincere incompetents and outright grifters drowns out the people doing useful work.
That said, here are a couple of things for your son—or others in similar positions—to try:
https://www.redwoodresearch.org/jobs (or https://www.anthropic.com/#careers, though we don’t have internships)
Write up a proposed independent project, then email some funders about a summer project grant. Think “implement a small GPT or EfficientZero, apply it to a small domain like two-digit arithmetic, and investigate a restricted version of a real problem (in interpretability, generalization, prosaic alignment, etc.).” A minimal sketch of the training half of such a project appears after this list.
You don’t need anyone’s permission to just do the project! Funding can make it easier to spend a lot of time on it, but doing much smaller projects in your free time is a great way to demonstrate that you’re fundable or hirable.
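To make the project suggestion above concrete, here is a minimal sketch, assuming PyTorch; the hyperparameters, the fixed-width string format, and the names here are my own illustrative choices, not anything specified in the thread. It trains a tiny decoder-only transformer (“small GPT”) on two-digit addition written as character strings.

```python
# Minimal sketch: tiny causal transformer trained on strings like "23+45=68 ".
# All sizes and formats below are illustrative assumptions, not a prescribed setup.
import random
import torch
import torch.nn as nn

VOCAB = list("0123456789+= ")                  # character-level vocabulary
STOI = {ch: i for i, ch in enumerate(VOCAB)}
PAD = STOI[" "]
WIDTH = 9                                      # "99+99=198" is the longest string

def make_example() -> str:
    """One padded training string, e.g. '23+45=68 '."""
    a, b = random.randint(0, 99), random.randint(0, 99)
    return f"{a}+{b}={a + b}".ljust(WIDTH)

def encode(s: str) -> torch.Tensor:
    return torch.tensor([STOI[c] for c in s], dtype=torch.long)

class TinyGPT(nn.Module):
    """A very small decoder-only model: an encoder stack run with a causal mask."""
    def __init__(self, vocab=len(VOCAB), d=64, heads=4, layers=2, ctx=WIDTH):
        super().__init__()
        self.tok = nn.Embedding(vocab, d)
        self.pos = nn.Embedding(ctx, d)
        layer = nn.TransformerEncoderLayer(d, heads, 4 * d, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, layers)
        self.head = nn.Linear(d, vocab)

    def forward(self, idx):
        T = idx.shape[1]
        x = self.tok(idx) + self.pos(torch.arange(T, device=idx.device))
        # Additive causal mask: -inf above the diagonal blocks attention to the future.
        mask = torch.triu(torch.full((T, T), float("-inf"), device=idx.device), diagonal=1)
        return self.head(self.blocks(x, mask=mask))   # next-token logits

model = TinyGPT()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss(ignore_index=PAD)        # don't score padding positions

for step in range(2000):
    batch = torch.stack([encode(make_example()) for _ in range(64)])
    logits = model(batch[:, :-1])                       # predict each next character
    loss = loss_fn(logits.reshape(-1, len(VOCAB)), batch[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 200 == 0:
        print(step, round(loss.item(), 3))
```

The boilerplate above is only the starting point; the research part of the project would be what comes next, e.g. holding out some operand pairs to test generalization, or inspecting attention patterns and intermediate activations for the interpretability angle.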
There is at least $10B that could straightforwardly be spent on AI safety. If these organizations are limited by money rather than by logistical bandwidth, they should ping OpenPhil/FTX/other funders. Individuals’ best use of their time is probably actual advocacy rather than donation.
“you just need to find the experts they’re anchoring on.”
I believe we are in the place we are in because Musk is listening to and considering the arguments of experts. Contra Yudkowsky, there is no Correct Contrarian Cluster: while Yudkowsky and Bostrom make a bunch of good and convincing arguments about the dangers of AI, the alignment problem, and even shorter timelines, I’ve always found their discussions of human values, of psychology, and even of how coordination works to be one giant missing mood.
(Here’s a tangential but recent example: Yudkowsky wrote his Death with Dignity post. As far as I can tell, the real motivating point was “Please don’t do idiotic things like blowing up an Intel fab because you think it’s the consequentialist thing to do, when you aren’t thinking about the second-order consequences which will completely overwhelm any ‘good’ you might have achieved.” Instead, he used the Death with Dignity frame, which didn’t actually land with people. Hell, my first-read reaction was “this is all bullshit, you defeatist idiot, I am going down swinging” before I did a second read and tried to work a defensible point out of the text.)
My model of what happened was that Musk read Superintelligence and thought: this is true, this is true, this is true, this point is questionable, this point is total bullshit… how do I integrate all this together?
When you’re in takeoff, it doesn’t really matter whether people sprint to AGI or not, because either way we know lots of teams will eventually get there. We don’t have good reason to believe that the capability is likely to be restricted to one group, especially given that they don’t seem to be using any secret sauce. We also don’t seem at all likely to be within the sort of alignment regime where a pivotal act isn’t basically guaranteed to kill everyone.
All the organizations that think about AGI know the situation, or will figure it out quickly, whether or not the EAs say something. But if we do nothing, they will not go through the logic of “what reward do I give it?” until right before they hit run. That only changes if the public advocacy happens first.
Have you thought about writing up a post suggesting this on the EA Forum and then sharing it in various EA groups? One thing I’d be careful about, though, is the wording “if you have the capacity to directly contribute to reducing AI risk, that is what you as a moral person should devote your life to,” as it could offend people committed to other cause areas. I’m sure you’d be able to find a way to word it better, though.
Reflecting on this and other comments, I decided to edit the original post to retract the call for a “fire alarm”.