A lot of the people around me (e.g. those I speak to ~weekly) seem sensitive to both new news and new insights, adapting both their priorities and their level of optimism[1]. I think you’re right about some people. I don’t know what ‘lots of alignment folk’ means, and I haven’t considered the topic of other people’s update-rates-and-biases much.
For me, most changes route via governance.
I have made mainly very positive updates on governance in the last ~year, in part from public things and in part from private interactions.
I’ve also made negative (evidential) updates based on the recent OpenAI kerfuffle (more weak evidence that Sam+OpenAI is misaligned; more evidence that org oversight doesn’t work well), though I think the causal fallout remains TBC.
Seemingly-mindkilled discourse on East-West competition prompted some negative updates, but recent signs of life from governments, e.g. at the UK Safety Summit, have undone those for now, maybe even pushed things the other way.
I’ve adapted my own priorities in light of all of these (and I think this adaptation is much more important than what my P(doom) does).
Besides their second-order impact on the Overton window etc., I have made very few object-level updates based on public research/deployment since 2020. Nothing has been especially surprising.
From deeper study and personal insights, I’ve made some negative updates based on a better appreciation of multi-agent challenges since 2021, when I started to think they were neglected.
I could say other things about personal research/insights, but they mainly change what I do/prioritise/say, not how pessimistic I am.
I’ve often thought that P(doom) is basically a distraction, and that what matters is how new news and insights affect your priorities. Nevertheless, I presumably have a (revealed) P(doom) with some level of resolution.