The risk with an AI is that it would be capable of changing humans in ways similar to the more dubious methods, while only using the “safe” methods.
I think what you’re saying makes sense, but I’m still on Dagon’s side. I’m not convinced this is uniquely an AI thing. It’s not like being a computer gives you charisma powers or makes you psychic—I think that basically comes down to breeding and exposure to toxic waste.
I’m not totally sure it’s an AI thing at all. When a lot of people talk about an AI, they seem to act as if they’re talking about “a being that can do tons of human things, but better.” It’s possible it could, but I don’t know if we have good evidence to assume AI would work like that.
A lot of parts of being human don’t seem to be visible from the outside, and current AI systems get caught in pretty superficial local minima when they try to analyze human behavior. If you think an AI could do the charisma schtick better than mere humans, it seems like you’d also have to assume the AI understands our social feelings better than we understand them.
We don’t know what the AI would be optimizing for and we don’t know how lumpy the gradient is, so I don’t think we have a foothold for solving this problem—and since finding that kind of foothold is probably an instance of the same intractable problem, I’m not convinced a really smart AI would have an advantage over us at solving us.