Deep unlearning researcher—chrislakin.com—Therapy-resistant insecurities
Chris Lakin
Even just increasing the “minimum wage” of AI safety work could be great imo. If all additional donations did was double the incomes of people working on existing projects, that seems positive. These donations go to real people in your network.
As someone who cares more than nil about finances, I found it very difficult to justify working on AI safety when not at a frontier lab… so I stopped. (It’s also emotionally a bit hard to believe AI safety is so freakin’ important when it often doesn’t pay.) So I suspect greater donations would help bring in more talent.
I can imagine some people are going to read this comment and think “But the really dedicated people will work on AI safety at minimum wage!” Eh, I have expensive health issues and I intend to raise kids in San Francisco. Lots of the non-profit AI safety work pays <$120k. Seems like my partner and I will need to make $350k+/yr.
On the object level, you could probably arrange some kind of donation swap with someone who wants to donate to 501(c)(3)s, right?
They donate $X to the non-501(c)(3)s you want and you donate $X from your DAF to the 501(c)(3)s they want.
Some vegans reading this may even discover they’re like me and do significantly better without eating many plants at all. I went most of my life without realizing that eating vegetables consistently made me feel bad. So now I just… don’t. I feel great and it prevents many health issues. I eat 1.5 lb of pasture-raised beef (from a special farm) per day and 1.5 sticks of butter. Sometimes Ora King salmon or Seatopia contaminant-safe fish or scallops. This solves my acne, reduces inflammation, fixes my lethargy, and spares me the extra 1–2h of sleep I need when I eat vegetables.
The veganism movement has singlehandedly reduced the IQs of many of the world’s smartest people
THANK YOU. I strongly upvoted on the EA Forum; they need this post the most imo
Congratulations!
Rewriting The Courage to be Disliked
Mindreading is ubiquitous
Random: Maybe current LessWrong readers would like this post? It’s been 5 years; maybe there’s a way to re-post it
When did you last use it?
Do you find these examples relevant? Examples of self-fulfilling prophecies in AI alignment
Thanks for writing this!
I’ve only encountered a handful out of a few hundred teenagers and adults who really had a deep sense of what it means for emotions to “make sense.”
How would you make sense of the emotion of doubting the value of other emotions?
This is the point in the class where I ask participants to pick an emotion, any emotion, that they feel is bad, or wish they didn’t have, or think the world would be better off without, and spend 3 minutes trying to generate the reason it exists, and might be worth having after all.
For anyone who wants to check how they did on this, you can copy this whole post into an AI (I recommend o3 or Claude 4 Opus with extended reasoning on) and ask it to role-play as the author of the post without immediately giving away the answers
Thank you for writing this! I will be linking to this
What came of this?
Related: What does davidad want from «boundaries»?
(also the broader work on boundaries for formalizing safety/autonomy, also the deontic sufficiency hypothesis)
This is extremely useful for coaching too.
p.s.: I also wrote a similar post about how this applies to self-fulfilling prophecies: chrislakin.blog/aim
oh, ty