My first reaction was somewhat skeptical, but I think it’s actually good.
I don’t think Scott Aaronson will do much to directly solve AI alignment in the near term. But blue-sky research is still valuable, and if there are interesting complexity-theory problems related to e.g. interpretability vs. steganography, I think it’s great to encourage research on those questions.