I’m saying that just because we know algorithms that will successfully leverage data and compute to set off an intelligence explosion (...ok I just realized you wrote TAI but IDK what anyone means by anything other than actual AGI), doesn’t mean we know much about how they leverage it and how that influences the explody-guy’s long-term goals.
I assume that current efforts in AI evals and AI interpretability will be pretty useless if we have very different infrastructures in 10 years. For example, I’m not sure how much LLM interp helps with o1-style high-level reasoning.
I also think that later AI could help us do research. So if the idea is that we could do high-level strategic reasoning to find strategies that aren’t tied to specific models/architectures, I assume we could do that reasoning much better with better AI.