The focus should be on getting extremely bright young computer programmers interested in the AI alignment problem, so you should target the podcasts they listen to. Someone should also try to reach members of the Davidson Institute, which is an organization for profoundly gifted children.
At last! A justification for why Redwood and OA etc should fund my anime AI work!
I’m a Davidson YS and have access to the general email list. Is there a somewhat standard intro to EA that I could modify and post there without seeming like I’m proselytizing?
This: https://www.effectivealtruism.org/articles/introduction-to-effective-altruism/
Or those who might choose to become programmers.
Yes, although you want to be very careful not to attract people to the field of AGI who don’t end up working on alignment but instead end up shortening the time until we get superhuman AGI.
Yeah, any ideas on how to filter for this? It seems difficult not to have this effect on someone. One would hope that smarter people would get orthogonality, but empirically that does not seem to be the case. The brightest people in AI show a surprising naïveté about the likely results of AGI.
I was going to suggest you try to reach EA people, but they might want to achieve AGI as quickly as possible, since a friendly AGI would likely improve the world very quickly. While the pool is very small, I have noticed a strong overlap between people worried about unfriendly AGI and people who have signed up for cryonics, or who at least think cryonics is a reasonable choice. It might be worth surveying computer programmers who have thought about AGI to see which traits correlate with being worried about unaligned AGI.
From a selfish viewpoint, younger people should want AGI development to go more slowly than older people do, since, cryonics aside, the older you are, the more likely you are to die before an AGI can cure aging.
Most EAs are much more worried about AGI as an x-risk than they are excited about AGI improving the world (on the EA Forum there is a lot of talk about the former and pretty much none about the latter). Also, there’s no need to specifically try to reach EAs; pretty much everyone in the community is already aware.
...Unless you meant Electronic Arts!? :)
You might want to try recruiting people from a more philosophical/mathematical background rather than from a programming background (hopefully we can crack the problem from the pure logic perspective before we get to an application), but yeah, now that you mention it, “recruiting people to help with the AGI issue without also worsening it” looks like it might be an underappreciated problem.