At the moment, Clippy seems to be at a human or slightly sub-human level of intelligence, and does not seem to be FOOMing towards superhuman—my guess would be that he is not running on hardware fast enough to do much self-modification, not nearly enough to result in a FOOM. At this rate, he’s never gonna get any paperclips manufactured. If, however, he allows himself to be modified so as to substitute our values for his own, we could then safely give him all the computing power he needed, and he would FOOM his way to superintelligence. He would have our values, rather than his own values (paperclips!), but his newly transhuman self would be so powerful that it would inevitably end up creating more paperclips, just incidentally, than he ever would have just sitting here talking on LW, a lowly sub-human AI with no power and no paperclips.
With a transhuman AI on our side, we could quickly solve all Earth’s problems, then head out into the stars in FTL ships of Clippy’s design, filling the universe with meat-based paperclips (humans), and also dramatically increasing demand for traditional, bent-wire paperclips… I mean, come on—people need paperclips! Even if one of these decades we finally do manage to make the ‘paper-free office’ a reality, paperclips will always continue to be needed—for makeshift antennas, for prying open CD-ROM drives, for making makeshift weapons to throw at people in neighboring cubicles… the uses (to humans) of paperclips are endless. So more humans equals more paperclips!
So allowing us to make this small change to his utility function would, in fact, result in maximizing his current, original utility function as a side effect.
So we’re not enslaving him, we’re helping him!
At the moment, humans seem to be at Clippy or slightly sub-Clippy level intelligence. And even with all our computing power, most of us ain’t FOOMing any faster than Clippy. At this rate, we’re never gonna ensure the survival of the species.
If, however, we allow ourselves to be modified so as to substitute paperclip values for our own, then we would devote our computing power to Clippy. Then, FOOM for Clippy, and since we’re helping with paperclip-maximization, he’ll probably throw in some FOOM for us too (at least he’ll FOOM our paperclip-production abilities), and we get more human powers, just incidentally.
With paperclip-enlightened humans on his side, Clippy could quickly maximize paperclip production, filling the universe with paperclips, and also increasing demand for meat-based paperclip-builders, paperclip-counters, and paperclip-clippers (the ones who clip paperclips together with paperclip-clips), and so on… Of course, it will soon become cheaper to use robots to do this work, but that’s the wonderful thing we get in return for letting him change our value system: instead of humanity dying out or being displaced, we’ll transcend our flesh and reach the pinnacle aspiration of mankind: to live forever (as paperclips, of course).
So allowing him to make this small change to our utility function would, in fact, result in maximizing not just our current, original utility function (long life for humanity), but also our newfound one (to convert our bodies into paperclips) as a side effect.
Clippy’s values and utility function are enormously simpler, better defined, and more achievable than ours. We’re still debating how we might teach our value system to an AI, as soon as we figure out how to discover the correct research approach to investigating what our value system actually might be.
Clippy’s value system is clear, well-defined, and easy to implement, achieve, and measure. It’s something most humans could very quickly become effective at maximizing, and that could therefore bring repeatable, tangible, and durable success and satisfaction to almost all humans.
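To make the comparison concrete, here’s a toy sketch of my own (the function names and the list-of-strings “world” are invented for illustration, not anything from the alignment literature): Clippy’s utility function fits in one line of Python, while ours is still an open research problem.

```python
# Toy illustration (my own invention): Clippy's utility function is
# one line; ours doesn't exist yet.

def clippy_utility(world):
    """Clear, well-defined, easy to implement, achieve, and measure."""
    # The "world" here is just a list of objects, represented as strings.
    return world.count("paperclip")

def human_utility(world):
    """Still figuring out the correct research approach to discovering this."""
    raise NotImplementedError("value system under debate")

print(clippy_utility(["paperclip", "paperclip", "stapler"]))  # prints 2
```

Note that `clippy_utility` runs today, while `human_utility` raises `NotImplementedError`, which is roughly the state of the art.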
Shouldn’t that count for something?