NancyLebovitz wasn’t suggesting that the risks of UFAI would be averted by legislation; rather, that such legislation would change the research landscape, and make it harder for SIAI to continue to do what it does—preparation would be warranted if such legislation were likely. I don’t think it’s likely enough to be worth dedicating thought and action to, especially thought and action which would otherwise go toward SIAI’s primary goals.
You’re probably right that there’s nothing practical to be done now. I’m sure you’d know very quickly if restrictions on independent AI research were being considered.
The more I think about it, the more I think a specialized self-optimizing AI (or several such, competing with each other) could do real damage to the financial markets, but I don’t know what precautions could be taken against that one.
Fictional evidence should be avoided. Also, this subject seems ripe for a moral panic, i.e., “these guys are making Terminator”.
How would it be stopped if it were illegal? Unless information tech suddenly goes away, it’s impossible.
Bingo. That’s exactly what I was concerned about.