High charisma/extroversion seems useful for movement-building. Do you have any experience in programming or AI?
Do you want to give it a go? Let’s suppose you were organising a conference on AI safety. Can you name 5 or 6 ways that the conference could end up being net-negative?
>Do you have any experience in programming or AI?
Programming yes, and I’d say I’m a skilled amateur, though I just need to do more programming. AI experience, not so much, other than reading (a large amount of) LW.
>Let’s suppose you were organising a conference on AI safety. Can you name 5 or 6 ways that the conference could end up being net-negative?
1. The conference involves someone talking about an extremely taboo topic (eugenics, say) as part of their plan to save the world from AI; the conference is covered in major news outlets as “AI Safety has an X problem” or something along those lines, and leading AI researchers are distracted from their work by the ensuing Twitter storm.
2. One of the main speakers at the event is very good at diverting money towards themselves through raw charisma, and ends up pulling funding for projects/compute away from other, more promising projects; later it turns out that their project actually accelerated the development of an unaligned AI.
3. The conference doesn’t involve the people actually trying to build an AGI, only the people who are already committed to and educated about AI alignment. The organizers and attendees are reassured by the consensus of “alignment is the most pressing problem we’re facing, and we need to take any steps necessary that don’t hurt us in the long run to fix it,” even though that attitude isn’t representative of the audience the organizers actually want to reach. The organizers then make future decisions on the assumption that “leading AI researchers are already concerned about alignment to the degree we want them to be”, which turns out to be wrong; they should have focused more on reaching those researchers.
4. The conference is just a waste of time, and the attendees could have been doing better things with the time/resources spent attending.
5. There’s a bus crash on the way to the event, and several key researchers die, setting back progress by years.
6. Similar to #2, the conference convinces researchers that [any of the wrong ways to approach “death with dignity” mentioned in this post] is the best way to try to solve x-risk from AGI, and resources are put towards plans that, if they fail, will fail catastrophically.
7. “If we manage to create an AI smarter than us, won’t it be more moral?” or some other AGI-related fallacy disproved in the Sequences is spouted as common wisdom, and people are convinced.
Cool, so I’d suggest looking into movement-building (obviously take this with a grain of salt given how little we’ve talked). It’s probably good to try to develop some AI knowledge as well so that people will take you more seriously, but it’s not like you’d need that before you start.
You did pretty well in terms of generating ways it could be net-negative. That makes me more confident that you would be able to have a net-positive impact.
I guess it’d also be nice to have some degree of organisational skill, but honestly, if there isn’t anyone else doing AI safety movement-building in your area, all you have to be is not completely terrible, so long as you’re aware of your limits and avoid organising anything that would go beyond them.