Can you clarify what you mean by working in AGI? Is your aim, for example, to contribute to MIRI research? In that case, at least at the moment it seems like a good idea to learn some mathematical logic (enough to understand the incompleteness theorem, say) and some basic probability. On the other hand, if your aim is to contribute to creating AGI, then I’m honestly not sure if it’s even a good idea on net to offer advice in this direction.
On the other hand, if your aim is to contribute to creating AGI, then I’m honestly not sure if it’s even a good idea on net to offer advice in this direction.
Translation: if we tell you, we may have to kill you later
:-D
Thanks, my aim is to contribute to MIRI research in some way, so I am skilling up in maths in order to understand the various research papers I have come across.
On the other hand, if your aim is to contribute to creating AGI, then I’m honestly not sure if it’s even a good idea on net to offer advice in this direction.
This seems to be a common response. Are AGI researchers encouraged to not talk about their work, or is it the UFAI risk that makes people hesitant to discuss specifics?
It’s certainly the AI risk that makes me hesitant to discuss specifics (to the extent that I have any specifics to discuss). I don’t know anything about the broader AGI community (to the extent that there is such a thing) other than the small subset of it I’m aware of through parts of the AI risk community, so I wouldn’t be able to tell you what their norms are.