[epistemic status: way too ill to be posting important things]
hi fellow people-who-i-think-have-much-of-the-plot
you two seem, from my perspective having read a fair amount of content from both of you, to have a bunch of similar models and goals, but quite different strategies.
on top of both having a firm grip on the core x-risk arguments, you both call out similar dynamics of capabilities orgs capturing people's will to save the world and turning it into more capabilities progress[1], you both take issue with somewhat different but i think related parts of openphil's grantmaking process, you both have high p(doom) and not very comfortable timelines, etc.
i suspect if connor explained why he's focusing on the things he is here, that would uncover the relevant difference. my current guess is connor is doing a kind of political alliance-building which is colliding with some of habryka's highly active integrity reflexes.
maybe this doesn’t change much, these strategies do seem at least somewhat collision-y as implemented so far, but i hope our kind can get along.
[1] e.g. “Turning care into acceleration” from https://www.thecompendium.ai/the-ai-race#these-ideologies-shape-the-playing-field
[2] e.g. https://www.lesswrong.com/posts/h4wXMXneTPDEjJ7nv/a-rocket-interpretability-analogy?commentId=md7QvniMyx3vYqeyD and lots of calling out Anthropic