Overall, I think that framing AI alignment as a problem is … erm … problematic. The best parts of my existence as a human do not feel like the constant framing and resolution of problems. Rather, they are filled with flow, curiosity, wonder, love.
I think we have to look in a different direction than trying to formulate and solve the “problems” of flow, curiosity, wonder, and love. I have no simple answer, and stating a simple answer in language would itself reveal that there was a problem, a category, that could “solve” AI and human alignment problems.
I keep looking for interesting ideas—and find yours among the most fascinating to date.
Enjoyed this.