Hello and welcome, Mr. Kadoi,
(I’m sorry that I don’t speak Japanese. I asked ChatGPT to translate this into Japanese)
People in the USA and the United Kingdom are still debating this. Some people think it is best to promote AI alignment and tell as many people as possible. Other people think that it will cause problems if everyone knows about AI alignment: there is a risk that more people will race to be first, and then more people will build AI quickly instead of safely.
Right now, everyone agrees that we should tell AI scientists and AI workers in Japan about AI alignment. I don’t know what the best overall strategy is, but I think one good strategy is this: Japanese AI scientists and AI workers should go out and introduce AI alignment to other Japanese AI scientists and AI workers.
Here are two really good posts about ways to introduce people to AI alignment (unfortunately, they are in English). Here in the US and the UK, we wish we had had these things 10 years ago, when we started talking about AI alignment. The first is this post, which is the best thing to show people when introducing them to AI alignment for the first time (it needs to be translated into Japanese): https://www.lesswrong.com/posts/hXHRNhFgCEFZhbejp/the-best-way-so-far-to-explain-ai-risk-the-precipice-p-137
The second is this post, on the lessons one man learned after talking to over 100 academics and scientists and introducing them to AI safety for the first time. It is meant to help people who will go out and talk about AI safety: https://forum.effectivealtruism.org/posts/kFufCHAmu7cwigH4B/lessons-learned-from-talking-to-greater-than-100-academics
I think it’s a good idea for more people to talk about AI alignment in Japanese, so that more conversations can happen in Japanese instead of English.
日本語が話せなくて申し訳ありません。ChatGPTにこれを日本語に翻訳してもらいました。
アメリカとイギリスの人々は、今でもこのことについて議論しています。AIアライメントをできるだけ多くの人に広めることが最善だと考える人もいます。一方で、誰もがAIアライメントについて知ることは問題を引き起こすと考える人もいます。より多くの人々が一番乗りを目指して競争し、その結果、安全にではなく急いでAIを開発するリスクがあるからです。
現時点では、日本のAI科学者やAIワーカーにAIアライメントについて伝えるべきだという点では、誰もが合意しています。最善の戦略はわかりませんが、良い戦略の1つは、日本のAI科学者やAIワーカーが、他の日本のAI科学者やAIワーカーにAIアライメントを紹介していくことだと思います。
これらは、人々にAIアライメントを紹介する方法についての非常に良い2つの投稿です(残念ながら英語です)。私たちがアメリカとイギリスでAIアライメントについて話し始めた10年前に、これらがあればよかったと思っています。1つ目は、初めてAIアライメントに触れる人に見せるのに最適な投稿です(日本語に翻訳する必要があります):https://www.lesswrong.com/posts/hXHRNhFgCEFZhbejp/the-best-way-so-far-to-explain-ai-risk-the-precipice-p-137
2つ目は、ある人が100人以上の学者や科学者と話し、彼らに初めてAI安全性を紹介した経験から得た教訓についての投稿です。この投稿は、これからAI安全性について話しに行く人々の助けになるように書かれています:https://forum.effectivealtruism.org/posts/kFufCHAmu7cwigH4B/lessons-learned-from-talking-to-greater-than-100-academics
より多くの人が日本語でAIアライメントについて話すようになり、英語ではなく日本語での会話が増えることは、良いことだと思います。