I’m not saying that it’s not worth pursuing as an agenda, but I’m also not convinced it is promising enough to justify pursuing math-related AI capabilities, compared to e.g. creating safety guarantees into which you can plug in AI capabilities once they arise anyway.
But “creating safety guarantees into which you can plug in AI capabilities once they arise anyway” is the point, and it requires at least some non-trivial advances in AI capabilities.
You should probably read the current programme thesis.