My immediate thought on this was that the conclusion [people at other AI organizations are intractably wrong] doesn’t follow from [DeepMind (the **organisation**) would not stop dangerous research even if good reasons...]. (edited to bold “organisation” rather than “DeepMind”, for clarity)
A natural way to interpret the latter is that people who came to care sufficiently (and be sufficiently MIRI-cautious) about alignment would tend to lose/fail-to-gain influence over DeepMind’s direction (through various incentive-driven dynamics). It being possible to change the mind of anyone at an organisation isn’t necessarily sufficient to change the direction of that organisation.
[To be clear, I know nothing DeepMind-specific here—just commenting on the general logic]
In context I thought it was clear that DeepMind is an example of an “other AI organization”, i.e. other than MIRI.
Sure, that’s clear of course.
I’m distinguishing between the organisation and “people at” the organisation.
It’s possible for an organisation’s path to be very hard to change due to incentives, regardless of the views of the members of that organisation.
So doubting the possibility of changing an organisation’s path doesn’t necessarily imply doubting the possibility of changing the minds of the people currently leading or working at that organisation.
[ETA—I’ll edit to clarify; I now see why it was misleading]