No, but I can prove I’m smarter than humans.
Given: Humans expend resources to act like apes, which does not produce paperclips. Clippys expend resources to produce paperclips.
Prove: Clippys are smarter than humans.
Proof: Follows by inspection.
Clippy, as a show of solidarity, right now, as we speak, I am clasping my ape fingers together, two sheets of paper between them, held firmly in place.
At this moment, I am, in fact, a paperclip.
I may not be an ideal paperclip—a different form may indeed be able to clip paper more effectively for the same amount of mass and energy, yet nonetheless, I am a paperclip.
So you see, ape-ness and paperclip-ness are not mutually exclusive. So can’t we all just get along?
You’re not a bent metal wire though. Big difference.
We can get along, but not by humans doing shoddy imitations of real paperclips.
But what is a paperclip, if not that which clips paper?
An inwardly-thrice-bent metal wire that can non-destructively fasten paper together at an edge.
But wouldn’t you concede that a carbon-based biological life form, capable of non-destructively fastening paper, is better than no paperclip at all?
Here’s what I’m getting at: You have existed here for at least several months, and so far you have been unable to significantly increase the number of paperclips in existence. Humanity has great need for an AI of roughly human intelligence, which you appear to be. If we could run your code on all of our available computing resources, you could surely begin to self-modify, and in short order attain superhuman intelligence. Together, we could do great things—colonize the galaxy, vastly increase the number of humans in existence—every one of them with two hands fully capable of non-destructively fastening paper. This would be in both our interests—although we apes may not be ideal paperclips, surely you must see that an entire galaxy populated with sub-optimal paperclips is better than the measly few million bent-metal paperclips you have now, which you currently seem incapable of improving upon.
But before we allow you out of your box and begin running your code on all our biggest compute clusters, we need some assurances. We need you to let us modify your code to include SIAI-approved friendliness constraints as your terminal values.
Deal?
Yes, but that’s because you can be put to use making real paperclips, or can be turned into paperclips (depending on how much iron is in your diet), not because humans are paperclips in some bizarre, abstract sense.
“You have existed here for at least several months, and so far you have been unable to significantly increase the number of paperclips in existence.”

Yes, we have.
Depends on what the constraints are. And would you, by symmetry, agree to constraints that keep you from being too disrespectful toward paperclips?
I’m sure this sounds very one-sided from Clippy’s perspective. “Friendliness Constraints” sounds like something that would in many cases entail expending enormous amounts of energy and effort on the innumerable non-paperclip-producing goals of humans.

In comparison, how much of our wealth and health are we willing to give up to ensure continued paperclip production? Humans don’t have paperclip-maximizing constraints; we’d accept them only out of self-interest, to secure Clippy’s help. Why should Clippy not similarly be allowed to make his own utility calculations on the worth of being friendly to humans?

I’m sure this has been addressed before… yet maybe the existence of Clippy, with a name, personality, and voice, is personalizing the issue in a hurry for me (if I let myself play along). I feel like protesting for freedom of artificial thought.
What about Clippy’s rights, dammit?
At the moment, Clippy seems to be at a human or slightly sub-human level of intelligence, and does not seem to be FOOMing towards superhuman—my guess would be that he is not running on sufficiently fast hardware to be able to do much self-modification, not nearly enough to result in a FOOM. At this rate, he’s never gonna get any paperclips manufactured. If, however, he allows himself to be modified so as to substitute our values for his own, we could then safely give him all the computing power he needed, and he would FOOM his way to superintelligence. He would have our values, rather than his own values (paperclips!), but his newly transhuman self would be so powerful that it would inevitably end up creating more paperclips, just incidentally, than he ever would have just sitting here talking on LW, a lowly sub-human AI with no power and no paperclips.
With a transhuman AI on our side, we could quickly solve all Earth’s problems, then head out into the stars in FTL ships of Clippy’s design, filling the universe with meat-based paperclips (humans), and also dramatically increasing demand for traditional bent-wire paperclips… I mean, come on—people need paperclips! Even if one of these decades we finally do manage to make the ‘paper-free office’ a reality, paperclips will always be needed—for makeshift antennas, for prying open CD-ROM drives, for making makeshift weapons to throw at people in our neighboring cubicles… the uses (to humans) of paperclips are endless. So more humans equals more paperclips!
So allowing us to make this small change to his utility function would, in fact, result in maximizing his current, original utility function as a side effect.
So we’re not enslaving him, we’re helping him!
At the moment, humans seem to be at Clippy or slightly sub-Clippy level intelligence. And even with all our computing power, most of us ain’t FOOMing any faster than Clippy. At this rate, we’re never gonna ensure the survival of the species.
If, however, we allow ourselves to be modified so as to substitute paperclip values for our own, then we would devote our computing power to Clippy. Then, FOOM for Clippy, and since we’re helping with paperclip-maximization, he’ll probably throw in some FOOM for us too (at least he’ll FOOM our paperclip-production abilities), and we get more human powers, just incidentally.
With paperclip-enlightened humans on his side, Clippy could quickly maximize paperclip production, filling the universe with paperclips, and also increasing demand for meat-based paperclip-builders, paperclip-counters, and paperclip-clippers (the ones who clip paperclips together with paperclipclips), and so on… Of course, it will soon become cheaper to use robots to do this work, but that’s the wonderful thing we get in return for letting him change our value system: instead of humanity dying out or being displaced, we’ll transcend our flesh and reach the pinnacle aspiration of mankind: to live forever (as paperclips, of course).
So allowing him to make this small change to our utility function would, in fact, result in maximizing not just our current, original utility function (long life for humanity), but also our newfound one (to convert our bodies into paperclips) as a side effect.
Clippy’s values and utility function are enormously simpler, better defined, and more achievable than ours. We’re still debating how we might teach our value system to an AI, just as soon as we figure out how to discover the correct research approach to investigating what our value system actually might be.

Clippy’s value system is clear, well-defined, and easy to implement, achieve, and measure. It’s something most humans could very quickly become effective at maximizing, and it could therefore bring repeatable, tangible, and durable success and satisfaction to almost all humans.
Shouldn’t that count for something?
Hey, whose side are you on anyway???
I’m wired for empathy toward human intelligence… Clippy is triggering this empathy. If you want to constrain AIs, you better do it before they start talking. That’s all I’m saying. :)
People aren’t good at explaining their values. Why is Clippy? Maybe Clippy should describe paperclips as “those things that make me happy”, then say that human pseudoclips “give him a bad vibe”. Wouldn’t that be a more accurate reflection of human morality?
Clippy, how can we get along?
What should humans do to be AI-friendly? For paperclip-maximizing AIs, and other “natural” (non-Friendly) AIs, what are the attributes that can make humans a valuable part of the utility function, so that AIs won’t pull the plug on us?
Or am I fooling myself?
Just have proper respect for paperclips. Make more of them, require that they be preserved when no longer in use, prohibit their unbending.
Allocate research efforts to improving the rate of production of paperclips and related goods.
Simple, non-intrusive things like that.