Oh, wow, it’s from 2012. Guess there’s not much point commenting on it, so I’ll actually reply substantively here.
The article heavily conflates “could work but is unethical” and “won’t work.” Let’s separate those two out, as hard as it is—just keep in mind that the halo/horns effect is at work here. Is building an FAI cruel to the AI?
First off, we’re creating a thinking artifact from scratch. There is no natural course of action—whatever desires it has are generated by a process we design and originate. One might argue, therefore (and some do) that it is immoral to create any conscious AI because it cannot consent to being brought into existence, cannot consent to having whatever values it ends up having. I think this position illustrates what it means to find the level of control we exert over AIs abhorrent.
Where does the line blur on how much control we have? If we augment a human being to be superintelligent, we don’t have that kind of control. An example that one can view either way is evolving an AI in a digital environment. On one hand, this method is hard to predict, so we won’t really know what we’re going to get. On the other hand, we choose every parameter of its environment, and we choose to evolve an AI, knowing that evolution tends to spit out a certain kind of organism, rather than using some other method. Ultimately, whether you group this with enhancing a human or with transparently-specified AI depends on your priorities about the world.
I see this might have been deleted, so I’ll stop here.