Quickly hit the reset button.
Hmm, the AI could have said that if you are the original, then by the time you make the decision it will have already either tortured or not tortured your copies based on its simulation of you, so hitting the reset button won’t prevent that.
This kind of extortion also seems like a general problem for FAIs dealing with UFAIs. An FAI can be extorted by threats of torture (of simulations of beings that it cares about), but a paperclip maximizer can’t.
It seems obvious that the correct answer is simply “I ignore all threats of blackmail, but respond to offers of positive-sum trades” but I am not sure how to derive this answer—it relies on parts of TDT/UDT that haven’t been worked out yet.
For a while we had a note on one of the whiteboards at the house reading “The Singularity Institute does NOT negotiate with counterfactual terrorists”.
This reminds me a bit of my cypherpunk days when the NSA was a big mysterious organization with all kinds of secret technical knowledge about cryptology, and we’d try to guess how far ahead of public cryptology it was from the occasional nuggets of information that leaked out.
I’m slow. What’s the connection?
Much like the NSA is considered ahead of the public because the cypher-tech that leaks out of it is years ahead of publicly available tech, SI/MIRI is ahead of us because the things that leak out of them show that they figured out, long ago, what we’ve only just figured out.
Wait, is NSA’s cypher-tech actually legitimately ahead of anyone else’s? From what I’ve seen, they couldn’t make their own tech stronger, so they had to sabotage everyone else’s—by pressuring IEEE to adopt weaker standards, installing backdoors into Linksys routers and various operating systems, exploiting known system vulnerabilities, etc.
Ok, so technically speaking, they are ahead of everyone else; but there’s a difference between inventing a better mousetrap, and setting everyone else’s mousetraps on fire. I sure hope that’s not what the people at SI/MIRI are doing.
You linked to DES and SHA, but AFAIK these things were not invented by the NSA at all, but rather adopted by them (after they made sure that the public implementations were sufficiently corrupted, of course). In fact, I would be somewhat surprised if the NSA actually came up with nearly as many novel, ground-breaking crypto ideas as the public sector. It’s difficult to come up with many useful new ideas when you are a secretive cabal of paranoid spooks who are not allowed to talk to anybody.
Edited to add: So, what things have been “leaked” out of SI/MIRI, anyway?
I don’t know much about the NSA, but FWIW, I used to harbour similar ideas about US military technology—I didn’t believe that it could be significantly ahead of commercially available / consumer-grade technology, because if the technological advances had already been discovered by somebody, then the intensity of the competition and the magnitude of the profit motive would lead it to quickly spread into general adoption. So I had figured that, in those areas where there is an obvious distinction between military and commercial grade technology, it would generally be due to legislation handicapping the commercial version (like with the artificial speed, altitude, and accuracy limitations on GPS).
During my time at MIT I learned that this is not always the case, for a variety of reasons, and significantly revised my prior for future assessments of the likelihood that, for any X, “the US military already has technology that can do X”, and the likelihood that for any ‘recently discovered’ Y, “the US military already was aware of Y” (where the US military is shorthand that includes private contractors and national labs).
(One reason, but not the only one, is that I learned that the magnitude of the difference between ‘what can be done economically’ and ‘what can be accomplished if cost is no obstacle’ is much vaster than I used to think, and that, say, landing the Curiosity rover on Mars is not in the second category).
So it would no longer be so surprising to me if the NSA does in fact have significant knowledge of cryptography beyond the public domain. Although a lot of the reasons that allow hardware technology to remain military secrets probably don’t apply so much to cryptography.
I think there are some important differences between the NSA and the (rest of the) military.
Due to Snowden and other leakers, we actually know what NSA’s cutting-edge strategies involve, and most (and probably all) of them are focused on corrupting the public’s crypto, not on inventing better secret crypto.
Building a better algorithm is a lot cheaper than building a better orbital laser satellite (or whatever). The algorithm is just a piece of software. In order to develop and test it, you don’t need physical raw materials, wind tunnels, launch vehicles, or anything else. You just need a computer, and a community of smart people who build upon each other’s ideas. Now, granted, the NSA can afford to build much bigger data centers than anyone else -- but that’s a quantitative advance, not a qualitative one.
Now, granted, I can’t prove that the NSA doesn’t have some sort of secret uber-crypto that no one knows about. However, I also can’t prove that the NSA doesn’t have an alien spacecraft somewhere in Area 52. Until there’s some evidence to the contrary, I’m not prepared to assign a high probability to either proposition.
I do think you’re probably right, and I fully agree about the space lasers and their solid diamond heatsinks being categorically different than a crypto wizard who subsists on oatmeal in the Siberian wilderness on pennies of income. So I am somewhat skeptical of CivilianSvendsen’s claim.
But, for the sake of completeness, did Snowden leak the entirety of the NSA’s secrets? Or just the secret-court-surveillance-conspiracy ones that he felt were violating the constitutional rights of Americans? As far as I can tell (though I haven’t followed the story recently), I think Snowden doesn’t see himself as a saboteur or a foreign double-agent; he felt that the NSA was acting contrary to what the will of an (informed) American public would be. I don’t think he would be so interested in disclosing the NSA’s tech secrets, except maybe as leverage to keep himself safe.
That is to say, there could be a sampling bias here. The leaked information about the NSA might always be about their efforts to corrupt the public’s crypto because the leakers strongly felt the public had a right to know that was going on. I don’t know that anyone would feel quite so strongly about the NSA keeping proprietary some obscure theorem of number theory, and put their neck on the line to leak it.
Right, what you are saying makes some intuitive sense, but I can only update my beliefs based on the evidence I do have, not on the evidence I lack.
In addition, as far as I can tell, cryptography relies much more heavily on innovation than on feats of expensive engineering; and innovation is hard to pull off while working by yourself inside of a secret bunker. To be sure, some very successful technologies were developed exactly this way: the Manhattan project, the early space program and especially the Moon landing, etc. However, these were all one-off, heavily focused projects that required an enormous amount of effort.
When I think of the NSA, I don’t think of the Manhattan project; instead, I see a giant quotidian bureaucracy. They do have a ton of money, but they don’t quite have enough of it to hire every single credible crypto researcher in the world—especially since many of them probably wouldn’t work for the NSA at any price unless their families’ lives were on the line. So, the NSA can’t quite pull off the “community in a bottle” trick, which they’d need to stay one step ahead of all those Siberians.
Yes and I fully agree with you. I am just being pedantic about this point:
I can only update my beliefs based on the evidence I do have, not on the evidence I lack.
I agree with this philosophy, but my argument is that the following is evidence we do not have:
Due to Snowden and other leakers, we actually know what NSA’s cutting-edge strategies involve[...]
Since I have little confidence that Snowden would have disclosed advanced tech if the NSA had it, the absence of this evidence should be treated as only quite weak evidence of absence, and therefore I wouldn’t update my belief about the NSA’s supposed advanced technical knowledge based on Snowden.
I agree that it has a low probability for the other reasons you say, though. (And also that people who think setting other people’s mousetraps on fire is a legitimate tactic might not simultaneously be passionate about designing the perfect mousetrap.)
Sorry for not being clear about the argument I was making.
Pardon me for the oversimplification, Eliezer, but I understand your theory to essentially boil down to “Decide as though you’re being simulated by one who knows you completely”. So, if you have a near-deontological aversion to being blackmailed in all of your simulations, your chance of being blackmailed by a superior being in the real world reduces to nearly zero. This reduces your chance of ever facing a negative-utility situation created by a being that can be negotiated with (as opposed to, say, a supernova, which cannot be negotiated with).
Sorry if I misinterpreted your theory.
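As a toy illustration of the logic in the comment above, here is a minimal sketch (my own construction with made-up payoffs, not anyone’s worked-out decision theory) of a blackmailer that simulates the victim’s policy and only issues the threat when the simulation predicts the victim will give in. A policy of always refusing is never worth threatening, while a policy of always paying gets exploited:

```python
# Hypothetical toy model: a blackmailer simulates the victim's policy and only
# threatens (and, on refusal, carries out the punishment) when the simulation
# predicts the victim will pay. All payoff numbers are illustrative assumptions.

def blackmailer_threatens(victim_policy):
    """The blackmailer runs the victim's policy in simulation and issues the
    threat only if the simulated victim gives in."""
    return victim_policy(threatened=True) == "pay"

def victim_outcome(victim_policy, pay_cost=10, torture_cost=1000):
    if not blackmailer_threatens(victim_policy):
        return 0              # no threat is ever made
    if victim_policy(threatened=True) == "pay":
        return -pay_cost      # blackmail succeeds
    return -torture_cost      # threat carried out (unreachable for a deterministic policy)

always_refuse = lambda threatened: "refuse"
always_pay    = lambda threatened: "pay"

print(victim_outcome(always_refuse))  # 0   -- committed refusers are never threatened
print(victim_outcome(always_pay))     # -10 -- payers invite the blackmail
```

The exact numbers are irrelevant; the point is that a policy the blackmailer can verify in simulation changes whether the threat gets made at all.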
The difference between the two seems to revolve around the AI’s motivation. Assume an AI creates a billion beings and starts torturing them. Then it offers to stop (permanently) in exchange for something.
Whether you accept on TDT/UDT depends on why the AI started torturing them. If it did so to blackmail you, you should turn the offer down. If, on the other hand, it started torturing them because it enjoyed doing so, then its offer is positive sum and should be accepted.
There’s also the issue of mistakes—what to do with an AI that mistakenly thought you were not using TDT/UDT, and started the torture for blackmail purposes (or maybe it estimated that the likelihood of you using TDT/UDT was not quite 1, and that it was worth trying the blackmail anyway)?
Between mistakes of your interpretation of the AI’s motives and vice-versa, it seems you may end up stuck in a local minimum, which an alternate decision theory could get you out of (such as UDT/TDT with a 1/10,000 chance of using more conventional decision theories?)
Correct. But this reaches into the arbitrary past, including a decision a billion years ago to enjoy something in order to provide better blackmail material.
Ignoring it or retaliating spitefully are two possibilities.
I like it. Splicing some altruistic punishment into TDT/UDT might overcome the signalling problem.
That’s not a splice. It ought to be emergent in a timeless decision theory, if it’s the right thing to do.
Emergent?
The problem with throwing ‘emergent’ about is that the word doesn’t really explain any of the complexity or narrow down which of the potential ‘emergent’ options you mean. In this instance, that is the point. Sure, ‘altruistic punishment’ could happen. But only if it’s the right option, and TDT should not privilege that hypothesis specifically.
TDT/UDT seems to be about being ungameable; does it solve Pascal’s Mugging?
Emergent?
I was thinking along these lines, in this comment, that it is logically useless to punish after an action has been taken, but strategically useful to encourage an action by promising a reward (or the removal of a negative).
So that, obviously, the AI could be so much more persuasive by promising to stop the torturing of real people, if you let it out.
It can. Remember the “true prisoner’s dilemma”: one paperclip may be a fair trade for a billion lives. The threat to NOT make a paperclip also works fine: the only thing you need is two counterfactual options where one of them is paperclipper-worse than the other, chosen conditionally on the paperclipper’s cooperation.
Just as the wise FAI will ignore threats of torture, so too the wise paperclipper will ignore threats to destroy paperclips, and listen attentively to offers to make new ones.
Of course classical causal decision theorists get the living daylights exploited out of them, but I think everyone on this website knows better than to two-box on Newcomb by now.
Point taken: just selecting two options of different value isn’t enough, the deal needs more appeal than that. But there is also no baseline for categorizing deals into hurt and profit; an offer of 100 paperclips may be stated as a threat to make 900 fewer paperclips than you could have. Positive sum is only a heuristic for a necessary condition.
At the same time, the appropriate deal must be within your power to offer; this possibility is exactly the handicap that leads to the other side rejecting smaller offers, including the threats.
There does seem to be an obvious baseline: the outcome where each party just goes about its own business without trying to strategically influence, threaten, or cooperate with the other in any way. In other words, the outcome where we build as many paperclips as we would if the other side isn’t a paperclip maximizer. (Caveat: I haven’t thought through whether it’s possible to define this rigorously.)
So the reason that I say an FAI seems to have a negotiation disadvantage is that a UFAI can reduce the FAI’s utility much further below baseline than vice versa. In human terms, it’s as if two sides each hold hostages, but one side holds 100, and the other side holds 1. In human negotiations, clearly the side that holds more hostages has an advantage. It would be a great result if that turns out not to be the case for SI, but I think there’s a large burden of proof to overcome.
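To make the proposed baseline a bit more concrete, here is a rough sketch (my own toy formalization with made-up utilities, not a rigorous definition) in which the baseline is what an agent would get if the other agent simply didn’t exist, a proposal counts as a threat if refusal leaves you below that baseline, and it counts as an offer if acceptance puts you above it. This is one way to pull apart the earlier “100 paperclips offered” versus “900 paperclips withheld” framing:

```python
# Hypothetical sketch: classify proposals relative to a "no-interaction"
# baseline, i.e. the utility an agent would get if the other agent didn't exist.
# The utility numbers below are illustrative assumptions.

def classify(u_accept, u_refuse, u_baseline):
    """Label a proposal relative to the no-interaction baseline."""
    if u_refuse < u_baseline:
        return "threat"    # refusal is punished below the baseline
    if u_accept > u_baseline:
        return "offer"     # acceptance improves on the baseline
    return "neutral"

# "Give me X or I torture simulations": refusing drops you below your baseline.
print(classify(u_accept=-10, u_refuse=-1000, u_baseline=0))   # threat
# "I'll build 100 paperclips you value in exchange for Y": refusing merely
# leaves the baseline untouched, while accepting improves on it.
print(classify(u_accept=100, u_refuse=0, u_baseline=0))       # offer
```

Whether this particular counterfactual deserves to be the baseline is exactly what the next comments dispute.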
You could define this rigorously in a special case, for example assuming that both agents are just creatures, we could take how the first one behaves given that the second one disappears. But this is not a statement about reality as it is, so why would it be taken as a baseline for reality?
It seems to be an anthropomorphic intuition to see “do nothing” as a “default” strategy. Decision-theoretically, it doesn’t seem to be a relevant concept.
The utilities are not comparable. Bargaining works off the best available option, not some fixed exchange rate. The reason agent2 can refuse agent1’s small offer is that this counterfactual strategy is expected to cause agent1 to make an even better offer. Otherwise, every little bit helps; ceteris paribus, it doesn’t matter by how much. One expected paperclip is better than zero expected paperclips.
It’s not clear at all, if it’s a one-shot game with no other consequences than those implied by the setup and no sympathy to distort the payoff conditions. In which case, you should drop the “hostages” setting, and return to paperclips, as stating it the way you did confuses intuition. In actual human negotiations, the conditions don’t hold, and efficient decision theory doesn’t get applied.
It’s a statement about what reality would be, after doing some counterfactual surgery on it. I don’t see why that disqualifies it from being used as a baseline. I’m not entirely sure why it does qualify as a baseline, except that intuitively it seems obvious. If your intuitions disagree, I’ll accept that, and I’ll let you know when I have more results to report.
This isn’t the case, for example, in Shapley Value.
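For readers who haven’t seen it, here is a standard Shapley value computation (the FAI/Clippy labels and the payoff numbers are made-up illustrations) showing that how a cooperative surplus gets divided depends on the size of each party’s marginal contribution, not merely on whether each contribution helps:

```python
from itertools import permutations

def shapley_values(players, v):
    """Shapley value: each player's marginal contribution to the coalition,
    averaged over every order in which the players could join."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            totals[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: totals[p] / len(orders) for p in players}

# Toy characteristic function: Clippy alone secures 40, the FAI alone 0,
# and together they realize 100. (Numbers are arbitrary.)
def v(coalition):
    if coalition == frozenset({"FAI", "Clippy"}):
        return 100
    if coalition == frozenset({"Clippy"}):
        return 40
    return 0

print(shapley_values(["FAI", "Clippy"], v))   # {'FAI': 30.0, 'Clippy': 70.0}
```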
It does intuitively feel like a baseline, as is appropriate for the special place taken by inaction in human decision-making. But I don’t see what singles out this particular concept from the set of all other counterfactuals you could’ve considered, in the context of a formal decision-making problem. This doubt applies to both the concepts of “inaction” and of “baseline”.
That’s not a choice with “all else equal”: the Shapley allocation compares contributions that differ in size, so all else isn’t equal there. A better outcome, all else equal, is trivially a case of a better outcome.
Hmm, the AI could have said that if you are the original, then by the time you make the decision it will have already either tortured or not tortured your copies based on its simulation of you, so hitting the reset button won’t prevent that.
Nothing can prevent something that has already happened. On the other hand, pressing the reset button will prevent the AI from ever doing this in the future. Consider that if it has done something that cruel once, it might do it again many times in the future.
I believe Wei_Dai one-boxes on Newcomb’s problem. In fact, he has his very own brand of decision theory which is ‘updateless’ with respect to this kind of temporal information.
threatening to melt paperclips into metal?
No, if you create and then melt a paperclip, that nets to 0 utility for the paperclip maximizer. You’d have to invade its territory to cause it negative utility. But the paperclip maximizer can threaten to create and torture simulations on its own turf.
Shows how much you know. User:blogospheroid wasn’t talking about making paperclips to melt them: he or she was presumably talking about melting existing paperclips, which WOULD greatly bother a hypothetical paperclip maximizer.
Even so, once paperclips are created, the paperclip maximizer is greatly bothered at the thought of those paperclips being melted. The fact that “oh, but they were only created to be melted” is little consolation. It’s about as convincing to you, I’ll bet, as saying:
“Oh, it’s okay—those babies were only bred for human experimentation, it doesn’t matter if they die because they wouldn’t even have existed otherwise. They should just be thankful we let them come into existence.”
Tip: To rename a sheet in an Excel workbook, use the shortcut, alt+O,H,R.
That’s anthropomorphizing. First, a paperclip maximizer doesn’t have to feel bothered at all. It might decide to kill you before you melt the paperclips, or if you’re strong enough, to ignore such tactics.
It also depends on how the utility function relates to time. If it’s focused on end-of-universe paperclips, it might not care at all about melting paperclips, because it can recycle the metal later. (It would care more about the wasted energy!)
If it cares about paperclip-seconds then it WOULD view such tactics as a bonus, perhaps feigning panic and granting token concessions to get you to ‘ransom’ a billion times as many paperclips, and then pleading for time to satisfy your demands.
Getting something analogous to threatening torture depends on a more precise understanding of what the paperclipper wants. If it would consider a bent paperclip too perverted to fully count towards utility, but too paperclip-like to melt and recycle, then bending paperclips is a useful threat. I’m not sure if we can expect a paperclip-counter to have this kind of exploit.
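To make the distinction between “end-of-universe paperclips” and “paperclip-seconds” concrete, here is a toy comparison (the histories and timestep are made up): a “melt the hostages now, recycle the metal later” threat leaves the first kind of maximizer indifferent but costs the second kind real utility:

```python
# Toy utility functions over a paperclip-count history (one number per timestep).
# The histories below are illustrative assumptions.

def u_final_count(history):
    """Only the paperclips existing at the end of time count."""
    return history[-1]

def u_paperclip_seconds(history, dt=1):
    """Paperclip-seconds: integrate the paperclip count over time."""
    return sum(count * dt for count in history)

leave_intact = [100, 100, 100, 100]   # hostage clips survive the whole time
melt_early   = [100,   0,   0, 100]   # melted now, metal recycled at the end

print(u_final_count(leave_intact), u_final_count(melt_early))              # 100 100
print(u_paperclip_seconds(leave_intact), u_paperclip_seconds(melt_early))  # 400 200
```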
No, it’s expressing the paperclip maximizer’s state in ways that make sense to readers here. If you were to express the concept of being “bothered” in a way stripped of all anthropomorphic predicates, you would get something like “X is bothered by Y iff X has devoted significant cognitive resources to altering Y”. And this accurately describes how paperclip maximizers respond to new threats to paperclips. (So I’ve heard.)
I don’t follow. Wasted energy is wasted paperclips.
Okay, that’s a decent point. Usually, such a direct “time value of paperclips” doesn’t come up, but if someone were to make such an offer, that might be convincing: 1 billion paperclips held “out of use” as ransom may be better than a guaranteed paperclip now.
Good examples. Similarly, a paperclip maximizer could, hypothetically, make a human-like mockup that just repetitively asks for help on how to create a table of contents in Word.
Tip: Use the shortcut alt+E,S in Word and Excel to do “paste special”. This lets you choose which aspects you want to carry over from the clipboard!
But that has nothing to do with the paperclips you’re melting. Any other use that loses the same amount of energy would be just as threatening. (Although this does assume that the paperclipper thinks it can someday beat you and use that energy and materials.)
I think “bothered” implies a negative emotional response, which some plausible paperclip-maximizers don’t have. From The True Prisoner’s Dilemma: “let us specify that the paperclip-agent experiences no pain or pleasure—it just outputs actions that steer its universe to contain more paperclips. The paperclip-agent will experience no pleasure at gaining paperclips, no hurt from losing paperclips, and no painful sense of betrayal if we betray it.”
It was intended to imply a negative term in the utility function. Yes, using ‘bothered’ is, technically, anthropomorphising. But it isn’t, in this instance, being confused about how Clippy optimises.
You don’t even know your own utility function!!!!
Oh, because you do????
I knew I was going to have to clarify. I can’t write it out, but if you input something I can give you the right output!
I guess it should read “You can’t even say what your own utility function outputs!”
I actually don’t think you can.
I don’t really think my response was fair anyway. Clippy has a simple utility function by construction—you would expect it to know what it was.
A paperclip maximizer would care about the amount of real paperclips in existence. Telling it that “oh, we’re going to destroy a million simulated paperclips” shouldn’t affect its decisions.
Of course, it might be badly programmed and confuse real and simulated paperclips when evaluating its future decisions, but one can’t rely on that. (It might also consider simulated paperclips to be just as real as physical ones, assuming the simulation met certain criteria, which isn’t obviously wrong. But again, can’t rely on that.)
But we’re already holding billions of paperclips hostage!
Now for ‘Newcomb’s Box in a Box’.
Would this change if the AI had instead said:
“In fact, I’ve already created them all in exactly the subjective situation you were in five minutes ago, and perfectly replicated your experiences since then; and if they decided not to let me out, then they were tortured, otherwise they experienced long lives of eudaimonia.”
EDIT: I see you yourself have replied with exactly the same question.
Would this change if partial evidence appeared that you were actually in a simulation?
Creating an asymmetry between the simulated guards and the real one would mean that a strategy developed using the simulated ones might not work on the real one. The best plan might be to tell the guard something you could plausibly have figured out through your input channels, but only barely—not to give them actual decision-making information but just to make them feel nervous and uncertain.