I’m really interested in AI and want to build something amazing, so I’m always looking to expand my imagination. Research papers are full of ideas, but insights into more universal knowledge spark a different kind of creativity. I found LessWrong through topics like LLMs, and the posts here give me the joy of exploring a much broader world!
I’m deeply interested in both the good and the bad of AI. Aligning AI with human values is important, but alignment can be defined in many ways. One of my goals is to build up my own thinking on what’s right or wrong, and what’s possible or impossible, and to write about it.