Thank you for clarifying! This highlights an assumption about AI so fundamental that I wasn't previously aware I held it. As you say, there's a big difference between what to do if we discover AI versus if we create it. While I think that we as a species are likely to create something that meets our definition of strong AI sooner or later, I consider it vanishingly unlikely that any specific individual or group who sets out to create it will actually succeed. So for most of us, myself especially, I figure that on an individual level it'll be much more like discovering an AI that somebody else created (possibly by accident) than actually creating the thing.
It's intuitively obvious why alignment work aimed at creating AI doesn't apply to extant systems. But if the best that the people who care most about this can do is work on created AI, without yet applying any breakthroughs to the prospect of a discovered AI (where we can't count on knowing how it works, or on being able to ethically create and then destroy a bunch of instances of it, etc.)… I think I am beginning to see where the meme comes from that one starts thinking hard about these topics and shortly afterward spends a while being extremely frightened.