It seems like FAI requires deeper math than UFAI, for some appropriate value of "deeper." But even building UFAI by "trial and error" still requires some math. You could imagine a fictitious Earth where it suddenly becomes easy to learn enough to start messing around with neural nets and decision trees and metaheuristics (or something). In that Earth, AI risk is increased by improving math education in that particular weird way.
I am trying to ask whether, in our Earth, there is a clear direction AI risk goes given more plausible kinds of improvements in math education. Are you basically saying that the math for UFAI is already easy enough that not many of the new cognitive resources freed up by those improvements would go towards UFAI? That doesn't seem true...
Are you basically saying that the math for UFAI is already easy enough that not many of the new cognitive resources freed up by those improvements would go towards UFAI?
I'd endorse that. But IME mathematical advances aren't usually new ways to do the same things; they're more often discoveries that it's possible to do new things.