I have a developing hunch that the abstract framing of arguments for AI Safety is unlikely to ever gain a foothold in Japan. The way forward here is a contextual framing of the same arguments. (Whether in English or Japanese is less and less relevant with machine translation.)
I’ve been a resident of Tokyo for twelve years, half of that as a NY lawyer in a Japanese international law firm. I’m also a founding member working with AI Safety 東京 and the Chair of the Tokyo rationality community. Shoka Kadoi, please express interest in our study group (勉強会).
As a lawyer engaged with AI safety, I often have conversations with the more abstract-minded members of our groups that reveal an intellectual acceptance of, but a strong aesthetic distaste for, the contextual nature of legal systems. (The primitives of legal systems are abstraction-resistant ideas like ‘reasonableness’.)
Aesthetic distaste for contextual primitives leads to abstract framing of problems. Abstract framing of AI safety issues tends to lead from standard AI premises to narrow conclusions that are often hard for contextual-minded people to follow. Conclusions like: we’ve found a very low X-percent chance of some very specific bad outcome, and so we logically need to take urgent preventative action.
To generalize, Japan as a whole (and perhaps most of the world) does not approach problems abstractly. Contextual framing of AI safety issues tends to lead from standard AI premises to broad and easily accepted conclusions. Conclusions like: we’ve found a very high Y-percent chance of social disruption, and we are urgently compelled to take information-gathering actions.
There’s obviously much more support needed for these framing claims. But you can see these essential differences play out in the respective AI regulatory approaches of the EU and Japan. (The EU is targeting abstract AI issues like bias, adversarial attacks, and biometrics with specific legislation. Japan is instead attempting to develop an ‘agile governance’ approach to AI in order to keep up with “the speed and complexity of AI innovation”. In this case, Japan’s approach seems wiser to me.)
If the arguments for existential risk are sound, both framings should converge on similar actions and outcomes. Japan is a tough nut to crack. But having both framings active around the world would mobilize a significantly larger number of brains on the problem. Mobilizing all those brains in Japan is the course to chart now.
I have less experience in Japan than Harold does, but would generally advocate a grounded approach to issues of AI safety and alignment, rather than an abstract one.
I was perhaps most struck over the weekend that I did not speak to anyone who had actually been involved in developing or running safety-critical systems (aviation, nuclear energy, CBW...), on which lives depended. This gave a lot of the conversations the flavour of a ‘glass bead game’.
As Japan is famously risk-averse, it would seem to me—perhaps naively—that grounded arguments should land well here.
I wholeheartedly agree, Colin. (I think we’re saying the same thing—let me know where we may disagree.)
It’s a daily challenge in my work to ‘translate’ what can sometimes seem like abstract nonsense into scenarios grounded in real context, and the reverse.
I want to add that a grounded, high-context decision process is slower (still wearing masks?) but significantly wiser (see the urbanism of Tokyo compared to any given US city).
I am under the impression that the public attitude towards AI safety / alignment is about to change significantly.
Strategies aimed at informing parts of the public, which may have been pointless in the past (abstract risks etc.), may now become more successful: mainstream newspapers are beginning to write about AI risks, and people are beginning to be concerned. The abstract risks are becoming more concrete.