I don’t know if you should switch offhand, but I notice this post of yours is pretty old and a lot has changed.
What’s your rough assessment of AI risk now?
If you haven’t thought about it explicitly, I think it’s probably worth spending a week thinking about.
Also, how many people work on your current project? If you left, would that tank the project, or are you pretty replaceable? (Or: if you left, are there other people who might then be more likely to leave?)
I think it’s pretty important and I’m glad a bunch of people are working on it. I seriously considered switching into it in spring 2022 before deciding to go into biorisk.
Also, how many people work on your current project? If you left, would that tank the project or be pretty replaceable?
We’re pretty small (~7), and I’ve recently started leading our near-term-first team (four people counting me, trying to hire two more). I think I’m not very replaceable: my strengths are quite different from, and highly complementary to, those of others on the team, especially from a “let’s get a monitoring system up and running now” perspective.
(I must admit to some snark in my short response to Mako above. I’m mildly grumpy about people going around as if alignment is literally the only thing that matters. But that’s also not really what he was saying, since he was pushing back against my worrying about dragons and not my day job.)
Nod. I had initially remembered the Superintelligence Risk Project as being more recent than 2017. Was there a 2022 writeup of your decision-making?