And the biggest question for me is not "Is AI going to doom the world? Can I work on this in order to save the world?" A lot of people expect that would be the question. That's not at all the question. The question for me is: is there a concrete problem that I can make progress on? Because in science, it's not sufficient for a problem to be enormously important. It has to be tractable. There has to be a way to make progress. And this was why I kept it at arm's length for as long as I did.
I thought this was interesting. But it does feel like, with AI, we need more people backchaining from the goal of saving humanity, rather than only looking forward to see which neat, tractable research questions present themselves.