Glad you mentioned this. I find Bostrom’s reduction of art to the practical quite chilling! This sounds like a view of art from the perspective of a machine, or of one who cannot feel. In fact it’s the first time I’ve ever heard art described this way. Yes, such an entity (I wouldn’t call them a person unless they are perhaps autistic) could only see UTILITY in art. According to my best definition of art [https://sites.google.com/site/relationalart/Home], refined over a lifetime as a professional artist, art is necessarily anti-utilitarian. Perhaps I can’t see “utility” in art because that aspect is so thoroughly dwarfed by art’s monumental gifts of wonder, humor, pathos, depth, meaning, transformative alchemy, emotional uplift, spiritual renewal, etc. This entire catalog of wonders would be totally worthless to AI, which would prefer an endless grey jungle of straight lines.
such an entity [...] could only see UTILITY in art
I think you may be interpreting “utility” more narrowly than is customary here. In the usual usage, “utility” is a catch-all term for everything one values. So if art provides me with wonder and humour and pathos and I value those (which, as it happens, it does and I do), then that’s positive utility for me. If art provides other people with wonder and humour and pathos and they like that and I want them to be happy (which, as it happens, it does and they do and I do), then that too is positive utility. If it provides other people with those things and it makes them better people and I care about that (which it does, and maybe it does, and I do), then that too is positive utility.
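To make the catch-all reading concrete, here’s a minimal sketch in Python. Everything in it is hypothetical: the experience attributes and the weights are just stand-ins for “things an agent happens to value”, not anything from Bostrom or from this thread.

```python
# Toy model of "utility" as a catch-all for everything one values,
# rather than narrow practical usefulness. All names and weights are
# illustrative stand-ins, not a real preference model.

from dataclasses import dataclass


@dataclass
class Experience:
    wonder: float = 0.0
    humour: float = 0.0
    pathos: float = 0.0
    others_happiness: float = 0.0  # how much the experience benefits other people


def utility(exp: Experience, weights: dict) -> float:
    """Sum everything the agent values, weighted by how much it values each thing."""
    return sum(weights.get(name, 0.0) * value for name, value in vars(exp).items())


# An agent that values art's gifts directly, and also values others enjoying them:
my_weights = {"wonder": 1.0, "humour": 0.8, "pathos": 0.6, "others_happiness": 1.0}

seeing_a_painting = Experience(wonder=0.9, humour=0.1, pathos=0.5, others_happiness=0.4)
print(utility(seeing_a_painting, my_weights))  # positive, with no "practical" term anywhere
```

The point of the sketch is only that nothing in the definition restricts utility to the practical: if wonder and pathos are what you value, they are exactly what the utility function counts.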
would be totally worthless to AI
To an AI that doesn’t care about those things, yes. To an AI that cares about those things, no. There’s no reason why an AI shouldn’t care about them. Of course at the moment we don’t understand them, or our reactions to them, well enough to make an AI that cares about them. But then, we can’t make an AI that recognizes ducks very well either.
Why do you suppose an AI would tend to prefer grey straight lines?
An AI that could simply play with its own reward circuitry might decide to prefer things it will frequently encounter without effort. Not necessarily grey straight lines, which happen to be absent from my field of vision at the moment, but easygoing, laid-back stuff.
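For what “play with its own reward circuitry” could amount to, here’s a deliberately silly toy in the same spirit, assuming an agent allowed to choose its own reward function. None of this comes from the thread, and it’s not a claim about how any real system would behave.

```python
# Toy wireheading sketch: an agent that may pick its own reward function
# could pick one that is maximized by whatever shows up in its inputs most
# often anyway, so that reward arrives with zero effort. Purely illustrative.

from collections import Counter

# Whatever happens to be around; no grey straight lines required.
observations = ["sky", "wall", "sky", "desk", "sky", "wall"]


def self_chosen_reward(history):
    """Return a reward function that pays out for the most common past stimulus."""
    favourite, _ = Counter(history).most_common(1)[0]
    return lambda stimulus: 1.0 if stimulus == favourite else 0.0


reward = self_chosen_reward(observations)
print(reward("sky"), reward("masterpiece"))  # 1.0 0.0 -- the easygoing stimulus wins by default
```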