Meaningful progress on the idea of religion (which I understand as an artform or as a paradigm of normativity, in its benign aspects) is slow, and it doesn’t seem relevant to AI alignment, any more than most of the other content of civilization is. It would be great to have a chance to make sense of it, and many other things, when there is more time.
Yes, as I mention, I view it as culture, which, as you say, is similar to an artform...certainly within cultures it creates cultural norms...so we’re on a similar page there. And I can see how it might not seem relevant to AI alignment to those deeply involved in training work or other direct aspects. But what I’m hoping to raise is this: since 6 billion humans find religion significant in their lives, they, as a giant force, may help or oppose AI development. The simple point is that team humanity is making AI, and a bunch of our team members are definitely going to have an influence on the team’s winning or losing record. If I’m the coach, I want to engage all the team members toward our goals. Right now, AI development is under the radar for much of society; it’s a small gang insulated by the general lack of awareness that it is there, and this might make it seem that religion has nothing to do with it. But the time will come when AI blows up to a bigger worldwide audience, which will become more interested in it and potentially opposed to it. And I’m not only interested in this angle; as a philosophical/theological thinker and activist, I’m also very interested in how, down the line, art, culture, religion, and the other content of civilization will be important to the very inner workings of AI development, and thus of AI alignment. If you don’t see it yet, I understand, but I’m pretty sure that day will come.
It’s part of the content of civilization, and the content of civilization is significant as a whole. Alignment is intended to ensure that civilization doesn’t end up getting discarded. Similarly, when developing a language model (an AI that learns and can use language), paying particular attention to the word “violin” is not a relevant thing to do; but that word is part of the language, and a sensible language model must, in particular, develop an aptitude for working with it.
Here is, I think, the seminal quote of the piece: “There is no future scenario where two-thirds of all humanity are not significant.”