What are some smaller-but-concrete challenges related to AI safety that are impacting people today?
I'm making a list of smaller but concrete challenges related to large topics like AI safety, ones that someone could work on to get practical, on-the-ground experience.
Some examples:
Help media companies better communicate advances in AI (e.g., the "Google's Sentient AI" headlines about LaMDA)
Help people detect deepfakes
Help online communities deal with deepfakes and set good policies
Help people detect online AI astroturfing (see the sketch below)
What else?
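To make the astroturfing item a bit more concrete, here is a minimal sketch of one signal such a tool might start from: clusters of near-duplicate comments posted across different accounts. It uses scikit-learn; the sample comments and the 0.7 threshold are invented for illustration, not a validated detector.

```python
# Toy illustration of one astroturfing signal: clusters of near-duplicate
# comments posted by different accounts. The comments below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

comments = [
    "This product changed my life, highly recommend!",
    "this product changed my life. highly recommended",
    "Honestly the best purchase I have made all year.",
    "This product totally changed my life, highly recommend it!",
]

# Character n-grams are robust to small edits (casing, punctuation, filler words).
vectors = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)).fit_transform(comments)
similarity = cosine_similarity(vectors)

# Flag pairs of distinct comments that are suspiciously similar.
# The 0.7 threshold is an arbitrary illustrative choice, not a tuned value.
THRESHOLD = 0.7
for i in range(len(comments)):
    for j in range(i + 1, len(comments)):
        if similarity[i, j] > THRESHOLD:
            print(f"Possible coordination: comment {i} ~ comment {j} "
                  f"(cosine similarity {similarity[i, j]:.2f})")
```

A real detector would need far more than text similarity (posting times, account age, network structure), but a small tool like this is the kind of tractable starting point I'm asking about.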
Perhaps I'm missing something (I don't work in AI research), but isn't the obvious first stop Amodei et al.'s Concrete Problems in AI Safety? Apologies if you already know about this paper and meant something else.
I think Eliezer would object to calling anything like that "related to AI safety", because it might imply that working on those is relevant to THE AI Safety problem, which, he is convinced, has no hope of being solved at this point; anything weaker just gives a false sense of security ("but we/people are working on it!"). See also his (rather dated and more optimistic) Rocket Alignment post.
There is a giant need for small, hands-on, practical projects and issues. I've started a "top-down" list based on the Center for Humane Technology's (CHT's) "Three Rules".
My plan is to start top-down, finding the small issues that can practically and securely be built bottom-up (including surrounding policy), then sort them by scope relative to a larger project, by need, by ability to complete, etc. If anyone has ideas on how best to organize these types of projects, please reply!
Once my notes are organized I will add them here.
Reference: https://www.humanetech.com/podcast/the-three-rules-of-humane-tech
Here are the three rules that Tristan Harris and Aza Raskin propose:
RULE 1: When we invent a new technology, we uncover a new class of responsibility. We didn’t need the right to be forgotten until computers could remember us forever, and we didn’t need the right to privacy in our laws until cameras were mass-produced. As we move into an age where technology could destroy the world so much faster than our responsibilities could catch up, it’s no longer okay to say it’s someone else’s job to define what responsibility means.
RULE 2: If that new technology confers power, it will start a race. Humane technologists are aware of the arms races their creations could set off before those creations run away from them – and they notice and think about the ways their new work could confer power.
RULE 3: If we don't coordinate, the race will end in tragedy. No one company or actor can solve these systemic problems alone. When it comes to AI, developers wrongly believe it would be impossible to sit down with counterparts at different companies and hammer out how to move at the pace of getting this right – for all our sakes.