Please help me find research on aspiring AI Safety folk!
I am two weeks into the strategy development phase of my movement building and almost ready to start ideating some programs for the year.
But I want these programs to solve the biggest pain points people experience when trying to have a positive impact in AI Safety.
Has anyone seen any research that looks at this in depth? For example, through an interview process followed by a survey to quantify how painful the pain points are?
Some examples of pain points I've observed so far through my interviews with technical folk:
I often felt overwhelmed by the vast amount of material to learn.
I felt there wasn't a clear way to navigate learning the required information.
I lacked an understanding of my strengths and weaknesses in relation to different AI Safety areas (i.e. personal fit / comparative advantage).
I lacked an understanding of my progress once I got started (e.g. am I doing well? Poorly? Fast enough?).
I regularly experienced fear of failure.
I regularly experienced fear of wasted effort / sunk costs.
Fear of admitting mistakes or starting over might prevent people from making necessary adjustments.
I found it difficult to identify my desired role / job (i.e. the end goal).
Even when I thought I knew my desired role, identifying the specific skills and knowledge it required was difficult.
There is no clear career pipeline: do X, then Y, then Z, and you have an A% chance of landing role B.
Finding time to upskill while working is difficult.
I found the funding ecosystem opaque.
Upskilling required a lot of discipline and motivation over potentially long periods.
I felt like nobody gave me realistic expectations of what the journey would be like.
I'm not aware of research on this. LW has a poll feature that at least the mods can use (based on embedded emoji reacts, which could be used for pain levels); maybe ask the LW team to set one up in a post for this.