Yes, although you want to be very careful not to attract people to the field of AGI who don’t end up working on alignment but end up shortening the time to when we get super-human AGI.
Yeah, any ideas on how to filter for this? It seems difficult not to have this effect on someone. One would hope the smarter people would get orthogonality, but empirically that does not seem to be the case. The brightest people in AI have a striking naïveté about the likely results of AGI.
I was going to suggest you try to reach EA people, but they might want to achieve AGI as quickly as possible, since a friendly AGI would likely improve the world quickly. While the pool is very small, I have noticed a strong overlap between people worried about unfriendly AGI and people who have signed up for cryonics, or who at least think cryonics is a reasonable choice. It might be worth doing a survey of computer programmers who have thought about AGI to see which traits correlate with being worried about unaligned AGI.
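If anyone actually ran such a survey, a first pass at the analysis could be as simple as correlating each trait column against the "worried about unaligned AGI" answer. A minimal sketch, assuming a hypothetical pandas DataFrame of responses (all column names and data below are made up for illustration, not real survey results):

```python
import pandas as pd

# Toy illustration: each row is one hypothetical respondent; the traits and
# values here are invented purely to show the shape of the analysis.
responses = pd.DataFrame({
    "signed_up_for_cryonics":      [1, 0, 1, 0, 0, 1],
    "thinks_cryonics_reasonable":  [1, 0, 1, 1, 0, 1],
    "years_programming":           [12, 3, 20, 7, 5, 15],
    "worried_about_unaligned_agi": [1, 0, 1, 1, 0, 1],
})

# For binary traits the Pearson correlation reduces to the phi coefficient,
# so a plain corr() gives a first-pass look at which traits track the worry.
trait_correlations = (
    responses.corr()["worried_about_unaligned_agi"]
    .drop("worried_about_unaligned_agi")
    .sort_values(ascending=False)
)
print(trait_correlations)
```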
From a selfish viewpoint, younger people should want AGI development to go slower than older people do since, cryonics aside, the older you are, the more likely you are to die before an AGI has the ability to cure aging.
Most EAs are much more worried about AGI being an x-risk than they are excited about AGI improving the world (if you look at the EA Forum, there is a lot of talk about the former and pretty much none about the latter). Also, no need to specifically try and reach EAs; pretty much everyone in the community is aware.
…Unless you meant Electronic Arts!? :)
You might want to try recruiting people from a more philosophical/mathematical background as opposed to a programming background (hopefully we might be able to crack the problem from the pure logic perspective before we get to an application). But yeah, now that you mention it, "recruiting people to help with the AGI issue without also worsening it" looks like it might be an underappreciated problem.