The AI safety field keeps having people show up who say “I want to help”, and then it turns out not to be that easy to help, so those people sort of shrug and go back to their lives.
I think this can be nearly completely solved by a method detailed in Decisive: expectation-setting. I remember that when employers warned potential employees about the difficulty and frustration involved in the job, retention skyrocketed. People (mostly) weren't discouraged from applying, and having their expectations set properly actually made them not mind the experience.