Caledonian,
Oh, sure, ant colonies are optimization processes too. But there are a few criteria by which we can distinguish the danger of an ant colony from the danger of a human from the danger of an AGI. For example:
(1) How powerful is the optimization process, i.e., how tiny is the target it can achieve? A sophisticated spambot might reliably produce proper English sentences, but I work towards a much smaller target (namely, a coherent conversation) which the spambot couldn't reliably hit.
Not counting the production of individual ants (which is the work of the much larger optimization process of evolution), the ant colony can achieve a certain social structure and reproduce that same structure in a new colony. That's nice, but nowhere near as powerful as humans painting the Mona Lisa or building rockets. (For one toy way to quantify "tininess of target", see the sketch at the end of this comment.)
(2) What are the goals of the process? An automated automobile plant is pretty powerful at hitting a small target (a particular sort of car, assembled out of raw materials), but we don't worry about it because there's no sense in which the plant is trying to expand, reproduce itself, threaten humans, etc.
(3) Is the operation of the process going to change either of the above? So far this is only partially true of some advanced biological intelligences and some rudimentary machine ones (not counting the slow improvements of ant colonies under evolution); but a self-modifying AI has the potential to alter (1) and (2) dramatically in a short period of time.
Can you at least accept that a smarter-than-human AI able to self-modify would exceed anything we’ve yet seen on properties (1) and (3)? That’s why the SIAI hopes to get (2) right, even given (3).
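Since point (1) leans on the idea of how tiny a target an optimizer can hit, here is a minimal toy sketch of one way to quantify it, as bits of optimization: log2 of the search-space size over the target-region size. The specific sizes below are invented purely for illustration, not anything measured in this comment:

```python
import math

def optimization_power_bits(search_space_size: float, target_size: float) -> float:
    """Bits of optimization: log2(|search space| / |target region|).
    A process hitting the target by blind luck would need roughly this
    many fair coin flips to come up right; tinier targets cost more bits."""
    return math.log2(search_space_size / target_size)

# Invented sizes, purely illustrative: out of 2**64 possible outputs,
# suppose 2**40 are grammatical English sentences and only 2**10 are
# coherent replies in this particular conversation.
spambot_bits = optimization_power_bits(2**64, 2**40)  # 24.0 bits
human_bits = optimization_power_bits(2**64, 2**10)    # 54.0 bits

print(f"grammatical-sentence target: {spambot_bits:.0f} bits")
print(f"coherent-conversation target: {human_bits:.0f} bits")
```

On this toy measure the smaller target simply costs more bits to hit, which is the sense in which a coherent conversation is a "much smaller target" than grammatical English.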