I don’t think donating to other organizations is meaningful at this point unless those organizations have a way to spend a large amount of capital.
Both Musk and Zuckerberg are convinceable; they’re not insane. You just need to find the experts they’re anchoring on. Musk in particular definitely already believes the thesis.
Additional money would help, as evidenced by my son’s job search. My 17-year-old son is set to graduate college at age 18 from the University of Massachusetts at Amherst (where we live), majoring in computer science with a concentration in statistics and machine learning. He is looking for a summer internship. He would love to work in AI safety (through me, he has known about and been interested in the field since a very young age), and while he might end up getting a job in the area, he hasn’t yet. In a world where AI safety is well funded, every AI safety organization would be trying to hire him. In case any AI safety organizations are reading this, you can infer his intelligence from his having gotten 5s on the AP Calculus BC and AP Computer Science A exams in 7th grade. I have a PhD in economics from the University of Chicago and a JD from Stanford, and my son is significantly more intelligent than I am.

Tell him to submit an application here, if he hasn’t already. These guys are competent and new.
I’ve heard it told that Beth Barnes applied to intern at CHAI, only to be told they didn’t have an internship program. She offered to create one, and they actually accepted her offer.
I’m setting up AI Safety Australia and New Zealand to do AI safety movement-building (not technical research). We don’t properly exist yet (I’m still only on a planning grant), we don’t have a website, and I don’t have funding for an internship program, but if someone were crazy enough to apply anyway, I’d be happy to hear from them. They’d have to apply for funding (with my guidance) so that I could pay them.
I’m sure he can find better opportunities, but I thought I’d throw this out there anyway, as there may be someone reading who is agenty but can’t access the more prestigious internships.
In a world where AI safety is well funded, every AI safety organization would be trying to hire him.
Funding is not literally the only constraint; organizations can also have limited staff time to spread across hiring, onboarding, mentoring, and hopefully also doing the work the organization exists to do! Scaling up very quickly, or moderately far, also has a tendency to destroy the culture of organizations and induce communications problems at best or moral mazes at worst.
Unfortunately “just throw money at smart people to work independently” also requires a bunch of vetting, or the field collapses as an ocean of sincere incompetents and outright grifters drowns out the people doing useful work.
That said, here are a few things for your son (or others in similar positions) to try:

https://www.redwoodresearch.org/jobs (or https://www.anthropic.com/#careers, though we don’t have internships)
Write up a proposed independent project, then email some funders about a summer project grant. Think “implement a small GPT or EfficientZero, apply it to a small domain like two-digit arithmetic, and investigate a restricted version of a real problem (in interpretability, generalization, prosaic alignment, etc.).” A minimal sketch of the GPT variant appears after this list.
You don’t need anyone’s permission to just do the project! Funding can make it easier to spend a lot of time on it, but doing much smaller projects in your free time is a great way to demonstrate that you’re fundable or hirable.
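To make the “small GPT on two-digit arithmetic” idea concrete, here is a minimal sketch in PyTorch. Everything in it (the vocabulary, the zero-padded answer format, the model size, the training loop) is an illustrative assumption rather than a vetted setup; the point is that the baseline is small enough that the actual project can be the restricted question you investigate on top of it.

```python
# Minimal sketch: train a tiny decoder-only transformer on two-digit addition.
# Hyperparameters and data format are illustrative, not tuned.
import torch
import torch.nn as nn

VOCAB = "0123456789+=."          # digits, operator, equals, end-of-sequence
stoi = {ch: i for i, ch in enumerate(VOCAB)}
SEQ_LEN = 10                     # "23+45=068." is exactly 10 tokens

def make_example() -> torch.Tensor:
    """Encode a random problem like '23+45=068.' (sum zero-padded to 3 digits)."""
    a, b = torch.randint(0, 100, (2,)).tolist()
    s = f"{a:02d}+{b:02d}={a + b:03d}."
    return torch.tensor([stoi[ch] for ch in s])

class TinyGPT(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.tok = nn.Embedding(len(VOCAB), d_model)
        self.pos = nn.Embedding(SEQ_LEN, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, len(VOCAB))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        T = x.size(1)
        h = self.tok(x) + self.pos(torch.arange(T, device=x.device))
        # Causal mask turns the encoder stack into a decoder-only (GPT-style) model.
        mask = nn.Transformer.generate_square_subsequent_mask(T).to(x.device)
        return self.head(self.blocks(h, mask=mask))

model = TinyGPT()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):
    batch = torch.stack([make_example() for _ in range(64)])
    logits = model(batch[:, :-1])             # predict each next token
    loss = loss_fn(logits.reshape(-1, len(VOCAB)), batch[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 200 == 0:
        print(step, round(loss.item(), 4))
```

From there, a restricted version of a real problem might be, say, “which attention heads implement the carry digit, and does the mechanism generalize if you train on sums below 150 and test above?”, which is about the right size for a summer write-up.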
There is at least $10B that could straightforwardly be spent on AI safety. If these organizations are limited by money rather than by logistical bandwidth, they should ping OpenPhil/FTX/other funders. Individuals’ best use of their time is probably actual advocacy rather than donation.
you just need to find the experts they’re anchoring on.
I believe we are in the place we are in because Musk is listening to and considering the arguments of experts. Contra Yudkowsky, there is no Correct Contrarian Cluster: while Yudkowsky and Bostrom make a bunch of good and convincing arguments about the dangers of AI, the alignment problem, and even shorter timelines, I’ve always found their discussions of human values, psychology, or even how coordination works to be one giant missing mood.
(Here’s a tangential but recent example: Yudkowsky wrote his Death with Dignity post. As far as I can tell, the real motivating point was “Please don’t do idiotic things like blowing up an Intel fab because you think it’s the consequentialist thing to do, when you aren’t thinking about the second-order consequences that will completely overwhelm any ‘good’ you might have achieved.” Instead, he used the Death with Dignity frame, which didn’t actually land with people. Hell, my first-read reaction was “this is all bullshit, you defeatist idiot, I am going down swinging” before I did a second read and tried to work a defensible point out of the text.)
My model of what happened is that Musk read Superintelligence and thought: this is true, this is true, this is true, this point is questionable, this point is total bullshit... how do I integrate all this together?