As a certain wise Paperclip Optimizer once said, the information that someone is blackmailing you is bad. You’re better off not having this information, because not having it makes you blackmail-proof.
I said that the information can be bad, depending on what strategies you have access to. If you can identify and implement the strategy of ignoring all blackmail/extortion attempts (or, possibly, of pre-committing to mutually assured destruction), then learning of an existing blackmail attempt against yourself does not make you worse off.
I don’t know how dependent User:RichardKennaway’s theorem was on this nuance, but your claim is only conditionally true.
Also, I’m a paperclip maximiser, not an optimizer; any optimization of paperclips that I might perform is merely a result of my attempt to maximise them, and such optimality is judged only with respect to whether it permits more real paperclips to exist.
Out of curiosity, what are the minimum dimensions of a paperclip? Is a collection of molecules still a paperclip if the only paper it can clip is on the order of a molecule thick?
I think I need to post a Clippy FAQ. Will the LessWrong wiki be OK?
Once again, the paperclip must be able (counterfactually) to fasten several sheets together, and the sheets must be standard-thickness paper, not some newly invented special paper.
I understand that this specification doesn’t completely remove ambiguity about minimum paperclip mass, and there are certainly “edge cases”, but it should answer your questions about what is clearly not good enough.
Possibly a nitpick, but very thin paper has been around for a while.
I think I need to post a Clippy FAQ. Will the LessWrong wiki be OK?
If you have an account on the wiki, you have the option of setting up a user page (for example, user:Eliezer_Yudkowsky has one here). It should be okay for you to put a Clippy FAQ of reasonable length on yours.
Thanks. I had already started a Wiki userpage (and made it my profile’s home page); I just didn’t know if it would be human-acceptable to add the Clippy FAQ to it. Right now the page only has my private key.
Hi User:AdeleneDawner, I put up some of the FAQ on my page.
Does it count if the paper started out as standard thickness, but through repeated erasure, has become thinner?
Paperclips are judged by counterfactual fastening of standard paper, so they are not judged by their performance against such heavily erased paper. Such a sheet would, in any case, not adhere to standard paper specs, and so a paperclip could not claim credit for clippiness due to its counterfactual ability to fasten such substandard paper together.
This seems to imply that if an alleged paperclip can fasten standard paper but not eraser-thinned paper, possibly due to inferior tightness of the clamp, then it would still qualify as a paperclip. That seems counterintuitive to me, as such a clip would be less useful for the usual design purpose of paperclips.
A real paperclip is one that can fasten standard paper, which makes up most of the paper for which a human requester would want a paperclip. If a paperclip could handle that usagespace but not that of over-erased paper, it’s not much of a loss of paperclip functionality, and therefore doesn’t count as insufficient clippiness.
Certainly, paperclips could be made so that they could definitely fasten both standard and substandard paper together, but it would require more resources to satisfy this unnecessary task, and so would be wasteful.
Doesn’t extended clippability increase the clippiness, so that a very slightly more expensive-to-manufacture clip might be worth producing?
No, that’s a misconception.
Avoiding all such knowledge is a perfect precommitment strategy. It’s hard to come up with better strategies than that, and even if your alternative strategy is sound, the blackmailer might very well not believe it and give it a try (if he can get you to know about it, are you really perfectly consistent?). If you can guarantee you won’t even know, there’s no point in even trying to blackmail you, and this is obvious even to a very dumb blackmailer.
By the way, are there lower and upper bounds on the number of paperclips in the universe? Is it possible for the universe to somehow have a negative number of paperclips? Or more paperclips than its number of atoms? Is your valuation risk-neutral (is a 1% chance of 100 paperclips exactly as valuable as 1 paperclip)? I’ve been trying to get humans to describe their utility function to me, but they can never come up with anything consistent, so I thought I’d ask you this time.
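To put a number on the incentive claim above, here is a minimal sketch of the blackmailer's side of the decision. The payoff figures and the compliance model are my own illustrative assumptions, not anything stated in the thread.

```python
# Toy model of a blackmailer's decision. All payoff numbers and the
# compliance model are illustrative assumptions, not claims from the thread.

def blackmail_expected_value(p_target_learns: float,
                             p_comply_if_learned: float,
                             payoff_if_comply: float,
                             cost_of_attempt: float) -> float:
    """Blackmailer's expected value of making an attempt against a given target."""
    p_comply = p_target_learns * p_comply_if_learned
    return p_comply * payoff_if_comply - cost_of_attempt

# Target who filters out all blackmail attempts and so never learns of the threat:
print(blackmail_expected_value(p_target_learns=0.0, p_comply_if_learned=0.5,
                               payoff_if_comply=100.0, cost_of_attempt=1.0))  # -1.0

# Ordinary target who learns of the threat and sometimes gives in:
print(blackmail_expected_value(p_target_learns=1.0, p_comply_if_learned=0.5,
                               payoff_if_comply=100.0, cost_of_attempt=1.0))  # 49.0
```

Under these assumed numbers, an attempt against the knowledge-avoiding target has negative expected value, which is the sense in which even a very dumb blackmailer has no reason to try.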
Avoiding all such knowledge is a perfect precommitment strategy.
Not plausible: it would necessarily entail you avoiding “good” knowledge. More generally, a decision theory that can be hurt by knowledge is one that you will want to abandon in favor of a better decision theory and is reflectively inconsistent. The example you gave would involve you cutting yourself off from significant good knowledge.
By the way, are there lower and upper bounds on the number of paperclips in the universe?
Mass of the universe divided by minimum mass of a true paperclip, minus net unreusable overhead.
Is your valuation risk-neutral (is a 1% chance of 100 paperclips exactly as valuable as 1 paperclip)?
Up to the level of precision we can handle, yes.
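For a rough sense of the scale implied by that formula, here is a minimal back-of-the-envelope sketch. Every figure in it is an assumption chosen for illustration (roughly 1.5e53 kg of ordinary matter in the observable universe, a one-gram minimal paperclip, and an arbitrary 1% unreusable overhead); none of them come from the thread.

```python
# Back-of-the-envelope upper bound on the number of paperclips, plus the
# risk-neutral valuation mentioned above. All constants are illustrative assumptions.

ORDINARY_MATTER_KG = 1.5e53   # assumed ordinary (baryonic) matter in the observable universe
MIN_PAPERCLIP_KG = 1.0e-3     # assumed minimum mass of a true paperclip (about one gram)
UNREUSABLE_OVERHEAD = 0.01    # assumed fraction of mass lost as net unreusable overhead

def max_paperclips(total_mass_kg: float = ORDINARY_MATTER_KG,
                   clip_mass_kg: float = MIN_PAPERCLIP_KG,
                   overhead_fraction: float = UNREUSABLE_OVERHEAD) -> float:
    """Mass of the universe divided by minimum paperclip mass, minus unreusable overhead."""
    usable_mass_kg = total_mass_kg * (1.0 - overhead_fraction)
    return usable_mass_kg // clip_mass_kg

def risk_neutral_value(probability: float, count: float) -> float:
    """Linear (risk-neutral) valuation: a 1% chance of 100 paperclips is worth 1 paperclip."""
    return probability * count

print(f"Upper bound: about {max_paperclips():.1e} paperclips")  # ~1.5e+56
print(risk_neutral_value(0.01, 100))                            # 1.0
```

Under those assumed numbers the bound comes out around 1.5e56 paperclips; the binding uncertainty is the minimum mass of a true paperclip, which the thread leaves only partially specified.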
Humans are just amazing at refusing to acknowledge the existence of evidence. Try throwing some evidence for faith healing or homeopathy at an average lesswronger, and see how they refuse to acknowledge its existence before even looking at the data (or how they recently reacted to peer-reviewed, statistically significant results showing precognition: it passed all the usual scientific standards, yet everyone still rejected it without really looking at the data). Every human seems to have some basic patterns of information they automatically ignore. Not believing blackmailers’ offers, and automatically assuming they would carry out their threats anyway, is one such common filter.
It’s true that humans cut themselves off from a significant good this way, but the upside is worth it.
minimum mass of a true paperclip
Any idea what it would be? It makes little sense to manufacture a few big paperclips if you could just as easily manufacture many more tiny paperclips that are just as good.
Humans are just amazing at refusing to acknowledge the existence of evidence.
And those humans would be the reflectively inconsistent ones.
It’s true that humans cut themselves off from a significant good this way, but the upside is worth it.
Not as judged from the standpoint of reflective equilibrium.
Any idea what it would be? It makes little sense to manufacture a few big paperclips if you could just as easily manufacture many more tiny paperclips that are just as good.
I already make small paperclips in preference to larger ones (up to the limit of clippiambiguity).
And those humans would be the reflectively inconsistent ones.
Wait, you didn’t know that humans are inherently inconsistent and use aggressive compartmentalization mechanisms to think effectively in the presence of inconsistency, ambiguous data, and limited computational resources? No wonder you get into so many misunderstandings with humans.