Question: Say someone dramatically increased the rate at which humans can learn mathematics (via, say, the Internet). Assume also that an intelligence explosion is likely to occur in the next century, that it will be a singleton, and that the way it is constructed determines the future of Earth-originating life. Does the increase in math learning ability make that intelligence explosion more or less likely to be friendly?
Responses I’ve heard to questions of the form, “Does solving problem X help or hinder safe AGI vs. unsafe AGI?”:
Improvements in rationality help safe AI, because sufficiently rational humans usually become unlikely to create unsafe AI. Most other improvements are a wash, because they help safe AI and unsafe AI equally.
Almost any improvement in productivity will slightly help safe AI, because more productive humans have more unconstrained time (i.e. time not spent paying the bills). Humans tend to do more good things and move towards rationality in their less constrained time, so increasing that time is a net win.
Not sure how I feel about these responses. But neither of them directly answers the question about math.
One answer would be that improving higher math education would be a net win, because safe AI will definitely require hard math, whereas improving all math education would be a net loss because, like Moore's Law, it would increase cognitive resources across the board and bring the timeline closer. Note that if we ignore network effects (researchers talking to researchers and convincing them not to work on unsafe AI), the question becomes: is the effect of improving X more like shifting the timeline earlier by a fixed Y years, as with increasing computing power, or more like scaling the timeline by some multiplicative factor, as with increasing human productivity? Thoughts?
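To make that shift-versus-scale distinction concrete (a minimal sketch in my own notation, not something from the thread): let $T$ be the number of years until the intelligence explosion absent the improvement, $Y$ a fixed head start, and $k > 1$ a speed-up factor.

$$T_{\text{shift}} = T - Y \qquad \text{versus} \qquad T_{\text{scale}} = \frac{T}{k}$$

Under the shift model the improvement removes the same number of years no matter how far off the explosion is; under the scale model it removes a fixed fraction of whatever time remains, so the two models diverge most when $T$ is large.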
I would think that FAI requires mathematics a lot more than UFAI does, since UFAI can be created through trial and error.
It seems like FAI requires deeper math than UFAI, for some appropriate value of "deeper". But this "trial and error" still requires some math. You could imagine a fictitious Earth where it suddenly becomes easy to learn enough to start messing around with neural nets, decision trees, and metaheuristics (or something). On that Earth, AI risk is increased by improving math education in that particular weird way.
I am trying to ask whether, on our Earth, there is a clear direction in which AI risk moves given more plausible kinds of improvements in math education. Are you basically saying that the math for UFAI is easy enough already that not too many new cognitive resources, freed up by those improvements, would go towards UFAI? That doesn't seem true...
Are you basically saying that the math for UFAI is easy enough already that not too many new cognitive resources, freed up by those improvements, would go towards UFAI?
I'd endorse that. But in my experience, mathematical advances aren't usually new ways to do the same things; more often they're discoveries that it's possible to do new things.