What I’m trying to say is that it’s much harder to do AI alignment research while models are still small, so TAI timelines somewhat dictate the progress of AI alignment research. If I wanted my 5-year plan to have the best chance at success, I would include “test this on a dog-intelligence-level AI” in my plan, even if I thought such an AI probably wouldn’t arrive by 2036, because having it would make alignment research much easier.