Oh yeah—this is different in that it’s actually good! (In the sense that it was made with substantial skill and effort, and it appeals to my tastes.)
I’m not sure it’s actually helpful for AI safety, but I think popular art is going to play a substantial role in the public dialogue. AI doom is a compelling topic for pop art, logic aside.
2.0 is now my favorite album; I’ve listened to it at least five times through since you recommended it. Thanks so much!! The electro-rock style does it for me, and I think the lyrics and music are well-written. Having each lyricist do only one song is an interesting approach that might raise quality.
It’s hard to say how much of it was directly written about AI risk, but all of it can be taken that way. Most of the songs read as written from the perspective of a misaligned AGI with human-like thinking and motivations. I find that highly plausible, since I think language model agents are the most likely route to AGI, and they’ll be curiously parahuman.
I’m glad you like it! I’d been listening to it for a while before I started reading LessWrong and AI risk content; then one day “Monster” came on, I started paying attention to the lyrics, and realised it was on the same topic.
It isn’t quite the same, but the musician “Big Data” has made some fantastic songs about AI risk.