I just saw a post from AI Digest on a Self-Awareness benchmark and thought, “holy fuck, I’m so happy someone is on top of this”.
I noticed a deep gratitude toward the alignment community for taking this problem so seriously. I personally see many good futures, but that’s to some extent built on the trust I have in this community. I’m generally incredibly impressed by the rigorous standards of thinking and the amount of work that’s been produced.
When I was a teenager I wanted to join a community of people who worked their asses off to make sure humanity survived into a future in space, and I’m very happy I found it.
So thank you, every single one of you working on this problem, for giving us a shot at making it.
(I feel a bit cheesy posting this, but I want to see more gratitude in the world, and I noticed it as a genuine feeling, so I figured: fuck it, let’s thank these awesome people for their work.)
Yes, there are problems; yes, people are being really stupid; yes, inner alignment and all of its cousins are really hard to solve. We’re generally a bit fucked, I agree. The brick wall is so high we can’t see the top, and we have to bash out each brick one at a time, and it is hard, really hard.
I get it, people, and yet we’ve got a shot, don’t we? The probability distribution over potential futures is being dragged toward better ones because of the work you put in, and I’m very grateful for that.
Like, I don’t know how much credit to give LW and the alignment community for the spread of alignment and AI Safety as an idea, but we’ve literally got Nobel Prize winners talking about this shit now. Think back five years: what the fuck? How did this happen? 2019 → 2024 has been an absolutely insane amount of change in the world, especially from an AI Safety perspective.
How do we have more than four AI Safety Institutes in the world? It’s genuinely mind-boggling to me, and I’m deeply impressed and inspired, which I think you should be too.