Unknown, let me rephrase. Suppose there exists a nonconscious AI, such that if you ask it what “two and two” is, it will say “four,” and if you ask it to solve a more complicated problem that involves “2+2=4” as an intermediate result, it will be able to solve the problem—and so on: the machine can successfully manipulate the information that 2+2=4 in all sorts of seemingly sophisticated ways. We might then find it prudent to say “The AI is aware of (knows that) 2+2=4,” or to program the AI itself to say “I am aware that …,” even though these statements are not exactly true if you (quite reasonably) define knowledge or awareness to require consciousness. If we wanted to be strict, we would program the AI to say, “This AI is able to manipulate the information that …” rather than “I know that …”—but rigorous use of language has long been sacrificed to the demands of user-friendly interfaces, and I see no reason why this should stop.
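To make that concrete, here is a toy sketch (my own hypothetical illustration, not any real system) of a program that manipulates the information that 2+2=4 without anything resembling consciousness:

```python
# Toy, hypothetical sketch: the program uses the fact that 2 + 2 = 4 as an
# intermediate result in a larger computation, and can report on that fact,
# all without anything resembling consciousness.

def answer(question: str) -> str:
    if question == "What is two and two?":
        return str(2 + 2)                       # it says "4"
    if question == "What is two and two, doubled?":
        intermediate = 2 + 2                    # 2+2=4 used as an intermediate step
        return str(intermediate * 2)
    return "I don't know."

# The strict phrasing versus the user-friendly phrasing:
STRICT = "This AI is able to manipulate the information that 2+2=4."
FRIENDLY = "I am aware that 2+2=4."

if __name__ == "__main__":
    print(answer("What is two and two?"))           # -> 4
    print(answer("What is two and two, doubled?"))  # -> 8
    print(FRIENDLY, "(strictly:", STRICT + ")")
```

Whether we describe such a program with “it knows” or with the strict “it is able to manipulate the information that …” is exactly the kind of interface-level choice I mean.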
Maybe you’re right, and superintelligence implies consciousness. I don’t see why it would, but maybe it does. How would we know? I worry about how productive discussions about AI can be when most of the participants rely so heavily on their intuitions and we don’t have any crushing experimental evidence. I can’t think of any good reason why a hard takeoff is impossible—but how should I know without a rigorous technical argument, and—despite these last posts—why should I trust myself to reason without one?
Ben Jones, you’re right. I should have said “to what degree, if any” rather than “whether.”
Richard, if you’re seriously proposing that consciousness is a mistaken idea, but morality isn’t, I can only say that that has got to be one unique theory of morality.
We can already make computers “know” in that sense (lots of computer software uses the fact that 2+2=4 all the time!). It’s like when we say “The printer doesn’t know that you want to print double-sided”; it’s a shorthand, but it’s a really, really effective shorthand that clearly captures a lot of the important phenomena—which means that in a certain sense maybe it’s not a shorthand at all. Maybe a printer can actually “know” things in a certain non-conscious sense.
I can’t think of any good reason why a hard takeoff is IMPOSSIBLE either—just a lot of reasons why it’s really, really unlikely. No other technology has had a hard takeoff; Moore’s Law is empirically validated and does not predict a hard takeoff; most phenomena in nature grow exponentially, and exponential growth does not allow for a hard takeoff; humans will be inventing the first few stages of AI, and they will neither know how to produce a hard takeoff nor want to… At some point, it becomes like arguing “I can’t think of any reason why a nuclear bomb hitting my house tomorrow is impossible.” Well, no, it’s not impossible; it’s just so unlikely that there’s no point worrying about it.
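To spell out the difference between “exponential” and “hard takeoff,” here is a toy numerical sketch (purely illustrative, with made-up dynamics and parameters): plain exponential growth has a constant doubling time, whereas a hard takeoff would require capability growth that feeds back on itself strongly enough to diverge in finite time.

```python
# Toy illustration (made-up dynamics and parameters, not a model of anything real):
#   dC/dt = r * C     -> exponential growth, constant doubling time (Moore's-Law-like)
#   dC/dt = r * C**2  -> self-amplifying growth that diverges near t = 1/(r*c0)

def simulate(feedback_exponent: float, c0: float = 1.0, r: float = 0.5,
             dt: float = 0.01, t_max: float = 20.0) -> list[tuple[float, float]]:
    """Euler-integrate dC/dt = r * C**feedback_exponent and return (t, C) samples."""
    c, t, samples = c0, 0.0, []
    while t < t_max and c < 1e12:   # stop once capability "explodes"
        samples.append((t, c))
        c += r * (c ** feedback_exponent) * dt
        t += dt
    return samples

if __name__ == "__main__":
    exponential = simulate(feedback_exponent=1.0)  # never blows up in finite time
    takeoff = simulate(feedback_exponent=2.0)      # diverges shortly after t = 2
    print("exponential growth reached", exponential[-1])
    print("self-amplifying growth reached", takeoff[-1])
```

The point of the contrast is just that extrapolating an empirical exponential trend like Moore’s Law never gets you the finite-time blow-up a hard takeoff would need.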