Thanks, my aim is to contribute to MIRI research in some way, so I am skilling up in Maths in order to comprehend the various research papers I have come across.
On the other hand, if your aim is to contribute to creating AGI, then I’m honestly not sure if it’s even a good idea on net to offer advice in this direction.
This seems to be a common response. Are AGI researchers encouraged not to talk about their work, or is it the UFAI risk that makes people hesitant to discuss specifics?
It’s certainly the AI risk that makes me hesitant to discuss specifics (to the extent that I have any specifics to discuss). I don’t know anything about the broader AGI community (to the extent that there is such a thing) other than the small subset of it I’m aware of through parts of the AI risk community, so I wouldn’t be able to tell you what their norms are.