I’m not talking about guidelines for the wider AI community. I’m talking about guidelines for my own research (and presumably other alignment researchers would be interested in the same). The wider AI community doesn’t share my assumptions about AI risk. In particular, I believe that most of what they’re doing is actively harmful. Therefore, I don’t expect them to accept these guidelines, and I’m also mostly uninterested in their input. Moreover, it’s not the broader public that worries me, but precisely the broader AI community. It is from them that I want to withhold things.
Creating any sort of guidelines that the wider community would also accept is a different sort of challenge altogether. It’s also a job for other people. Personally, I have enough on my plate as it is, and politics is not my comparative advantage by any margin.