I was less black-pilled when I wrote this. I also had the idea that, though my own attempts to learn AI safety had failed spectacularly, perhaps I could encourage more gifted people to try; given my skills, or lack thereof, I hoped this might be one way I could have an impact, since trying is the first filter. Though the world looks scarier now than when I wrote this, to those of high ability I would still say: we are very close to a point where your genius will not be remarkable, where one can squeeze thoughts more beautiful and clear than you have any hope of producing out of a GPU. If there was ever a time to work on the actually important problems, it is surely now.
indeed. but the first superintelligences aren’t looking to be superagentic, which I’d note is a mild reassurance. the runway is short, but I think safety has liftoff. don’t lose hope just yet :)