I’d suggest talking to AI Safety Support; they offer free calls with people who want to work in the field. Rohin’s advice for alignment researchers is also worth looking at; it talks a fair amount about PhDs.
For that specific topic, maybe https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic is relevant?