Thanks, Oliver. The biggest update for me here, which made your entire comment worth reading, was that you said this:
I also think it’s really not true that coordination has been “fraught to even suggest”.
I’m surprised that you think that, but have updated on your statement at face value that you in fact do. By contrast, my experience around a bunch of common acquaintances of ours has been much the same as Katja’s, like this:
Some people: AI might kill everyone. We should design a godlike super-AI of perfect goodness to prevent that.
Others: wow that sounds extremely ambitious
Some people: yeah but it’s very important and also we are extremely smart so idk it could work
[Work on it for a decade and a half]
Some people: ok that’s pretty hard, we give up
Others: oh huh shouldn’t we maybe try to stop the building of this dangerous AI?
Some people: hmm, that would involve coordinating numerous people—we may be arrogant enough to think that we might build a god-machine that can take over the world and remake it as a paradise, but we aren’t delusional
In fact I think I may have even heard the word “delusional” specifically applied to people working on AI governance (though not by you) for thinking that coordination on AI regulation is possible / valuable / worth pursuing in service of existential safety.
As for the rest of your narrative of what’s been happening in the world, to me it seems like a random mix of statements that are clearly correct (e.g., trying to coordinate with people who don’t care about honesty or integrity will get you screwed) and other statements that seem, as you say,
pretty crazy to me,
and I agree that for the purpose of syncing world models,
I don’t [think] this comment thread is the right context.
Anyway, cheers for giving me some insight into your thinking here.