-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Raemon received one Quirrell point on 16/4/2011, for his post
http://lesswrong.com/r/discussion/lw/59x/high_value_karma_vs_regular/
having inspired the idea of issuing Quirrell points on Less Wrong.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG of some sort
iQIcBAEBAgAGBQJNqjzHAAoJEJVKvKyQdzsK/hMQAKlalx44MZT/7xkplZ6i5eC/
uRFz8fOWFeErxB0OYme32e8MQwgzxPjBCYrC+bEZ9cnMoMA0VSx9U+LhMKu+4PQM
7evQRZu0NL4iRwRjTZjs0Sug4GiWI/hGj8bjq/Ax1RfkI6Vg48PVSaWbWDpfPHks
EMqSVVIA24XAZZRAL2xzKVujyOA9JMu22ppBUuMqP8cTb1uXzhkLm+/IQ0HR+6tI
JEqeAMQ0WSJnIpw4T6xlcNUVNbTAunhTmZE0ZMCXYuuQrbnmAMdLa3DHJxkrSixT
zaMESi52XYRjo9spWH3MB0Gft81m1OrbiD6uSpIU9VvnPShasKXEs2GeRxhlylzR
4LUOSuBEkJHBleMB4T2tXm+9RNYcc8vYjnZj2DkD4QQF7AzOKUHD0BhpKtziLbRN
Do3wmJBFrRMHCKA7K8XJYeeBLP6kjmU7E7Wm5gxP5JTKTSlUjUFQJogHUth6a/3+
bGhVL2cArLrVJQTf/+qdvhBereX43V5cMwxUaWw81l26AfMlXHeENzpUV8hR9c8N
l6CprLJ5ew3Z9H70xnGTqD8GAktlEgVDLMkvGD4FD/3AT7S4v+o+9HxqiL6RGv9V
FaxCF4lhRX+LJ08VCESHYgQ9+ZIpRerOPmkeA/iex4p3UqUBl2dlXJdX+rPCS0rb
UNBJqPxxRRiJ0L+QCvXP
=Upbv
-----END PGP SIGNATURE-----
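For anyone who wants to check the signature above, here is a minimal sketch using the python-gnupg package. It assumes gpg is installed, that the signed message above has been saved as quirrell_point.asc (a hypothetical filename), and that the signer's public key has already been imported into the keyring; none of that is specified in the original message.

    import gnupg

    # Uses the default GnuPG home directory and keyring; the signer's
    # public key must already be imported for verification to succeed.
    gpg = gnupg.GPG()

    # quirrell_point.asc is a placeholder name for the clearsigned
    # message saved verbatim, headers and signature block included.
    with open("quirrell_point.asc", "rb") as f:
        verified = gpg.verify_file(f)

    # verified.valid is True only if the signature checks out against
    # a key present in the keyring.
    print(verified.valid, verified.key_id, verified.status)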
A priori knowledge: yes or no? Yes
Abstract objects: Platonism or nominalism? Either, depending on whether the second vowel has rising or falling intonation
Aesthetic value: objective or subjective? Subjective
Analytic-synthetic distinction: yes or no? Yes
Epistemic justification: internalism or externalism? Internalism
External world: idealism, skepticism, or non-skeptical realism? Skepticism. I haven’t been able to take realism seriously since I left Hogwarts.
Free will: compatibilism, libertarianism, or no free will? No free will
God: theism or atheism? Atheism
Knowledge: empiricism or rationalism? Empiricism
Knowledge claims: contextualism, relativism, or invariantism? Can be any of these, or something else entirely, depending on the specific knowledge and how it interacts with the relevant interdicts
Laws of nature: Humean or non-Humean? Non-Humean
Logic: classical or non-classical? Classical
Mental content: internalism or externalism? Internalism
Meta-ethics: moral realism or moral anti-realism? Moral anti-realism
Metaphilosophy: naturalism or non-naturalism? Naturalism
Mind: physicalism or non-physicalism? Depends whose mind we’re talking about
Moral judgment: cognitivism or non-cognitivism? Non-cognitivism
Moral motivation: internalism or externalism? Externalism
Newcomb’s problem: one box or two boxes? The first box twice
Normative ethics: deontology, consequentialism, or virtue ethics? Consequentialism
Perceptual experience: disjunctivism, qualia theory, representationalism, or sense-datum theory? Sense-datum theory
Personal identity: biological view, psychological view, or further-fact view? Further-fact view
Politics: communitarianism, egalitarianism, or libertarianism? Snicker
Proper names: Fregean or Millian? It’s more complicated than that (some names are pointers, some are independent entities, and some can even communicate in binary by agreeing or disagreeing with a sequence of pronouns)
Science: scientific realism or scientific anti-realism? Realism with some caveats
Teletransporter (new matter): survival or death? Survival
Time: A-theory or B-theory? Neither. These both claim the past and future are disjoint. What the fuck?
Trolley problem (five straight ahead, one on side track, turn requires switching, what ought one do?): switch or don’t switch? Varies depending on mood
Truth: correspondence, deflationary, or epistemic? Correspondence
Zombies: inconceivable, conceivable but not metaphysically possible, or metaphysically possible? Actual
This comment is more likely if Silas is Clippy than if he isn’t.
I wouldn’t exactly call it a cover-up. It looks to me like the actual goal was to ensure that a particular subject wouldn’t develop further, by derailing any discussions about it into meta-discussions about censorship. Lots of noise was made, but no one ever published a sufficiently detailed description of the spell, so this did in fact succeed in averting a minor disaster.
You seem to be under the impression that Eliezer is going to create an artificial general intelligence, and oversight is necessary to ensure that he doesn’t create one which places his goals over humanity’s interests. It is important, you say, that he is not allowed unchecked power. This is all fine, except for one very important fact that you’ve missed.
Eliezer Yudkowsky can’t program. He’s never published a nontrivial piece of software, and doesn’t spend time coding. In the one way that matters, he’s a muggle. Ineligible to write an AI. Eliezer has not positioned himself to be the hero, the one who writes the AI or implements its utility function. The hero, if there is to be one, has not yet appeared on stage. No, Eliezer has positioned himself to be the mysterious old wizard—to lay out a path, and let someone else follow it. You want there to be oversight over Eliezer, and Eliezer wants to be the oversight over someone else, to be determined.
But maybe we shouldn’t trust Eliezer to be the mysterious old wizard, either. If the hero/AI programmer comes to him with a seed AI, then he knows it exists, and finding out that a seed AI exists before it launches is the hardest part of any plan to steal it and rewrite its utility function to conquer the universe. That would be pretty evil, but would “transparency and oversight” make things turn out better, or worse? As far as I can tell, transparency would mean announcing the existence of a pre-launch AI to the world. This wouldn’t stop Eliezer from making a play to conquer the universe, but it would present that option to everybody else, including at least some people and organizations who are definitely evil.
So that’s a bad plan. A better plan would be to write a seed AI yourself, keep it secret from Eliezer, and when it’s time to launch, ask for my input instead.
I’d just like to point out that “anonymous” is a pre-existing term for all people who choose not to identify themselves, so any time a journalist says “anonymous” did something, they are merely professing their own ignorance, regardless of whether the A is capitalized. That said, the term seems to have become popular among a particularly low-status sort of person, so I advise everyone to use pen names and explain their unidentifiability in complete sentences.
I’m curious what the marginal next-best strategy is. I’m also curious why you would be interested in promoting the unmasking of users.
Not all users, just the few I happen to be curious about. And no, I won’t say anything more about what the marginal next-best strategy is other than that I’m immune to it too, and −1 Quirrell point for asking.
I have just realized that sitemeter has the following data published about my visit, in a searchable and browsable format:
Searchable my behind! I looked into what it would take to use this to, for example, unmask Clippy, and it was less usable than the marginal next-best strategy.
The world around us redounds with opportunities, explodes with opportunities, which nearly all folk ignore because it would require them to violate a habit of thought … I cannot quite comprehend what goes through people’s minds when they repeat the same failed strategy over and over, but apparently it is an astonishingly rare realization that you can try something else.
-- Eliezer Yudkowsky, putting words in my other copy’s mouth
I voted on this and the immediate parent, but I won’t reveal why, or which direction, or how many times, or which account I used.
You’re safeguarding against the wrong thing. If I needed to fake a prediction that badly, I’d find a security hole in Less Wrong with which to edit all your comments. I wouldn’t waste time establishing karma for sockpuppets to post editable hashes to deter others from posting hashes themselves; that would be silly. But as it happens, I’m not planning to edit this hash, and doing that wouldn’t have been a viable strategy in the first place.
When should you punish someone for a crime they will commit in the future?
Easy. When they can predict you well enough, and they think you can predict them well enough, that if you would-counterfactually punish them for committing a crime in the future, it influences the probability that they will commit the crime by enough to outweigh the cost of administering the punishment times the probability that you will have to do so. Or when you want to punish them for an unrelated reason and need a pretext.
Not every philosophical question needs to be complicated.
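For concreteness, the condition above reduces to a one-line comparison. A toy sketch with invented numbers; every probability and cost here is an assumption for illustration, not anything stated in the original comment:

    # Toy version of the condition above, with made-up numbers.
    # Precommit to punishing iff the deterrence benefit exceeds the
    # expected cost of actually having to follow through.

    p_crime_without_threat = 0.30  # chance they commit the crime absent the threat
    p_crime_with_threat = 0.05     # chance they commit it despite a credible threat
    harm_of_crime = 100.0          # cost to you if the crime happens
    cost_of_punishing = 20.0       # cost to you of administering the punishment

    deterrence_benefit = (p_crime_without_threat - p_crime_with_threat) * harm_of_crime
    expected_punishment_cost = p_crime_with_threat * cost_of_punishing

    if deterrence_benefit > expected_punishment_cost:
        print("Precommit to the punishment.")
    else:
        print("Not worth it, absent an unrelated pretext.")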
Someone as clever, powerful, and rich as yourself can likely find a collision if you get to choose both source texts (which is easier than finding a collision with one of the two inputs determined by someone else).
This is actually much harder than you’d think. A hash function is considered broken if any collision is found, but a mere collision is not sufficient; to be useful, a collision must have chosen properties. In the case of md5sum, it is possible to generate collisions between files which differ in a 128-byte-aligned block, with the same prefix and suffix. This works well for any file format that is scriptable or de-facto scriptable—wrap the colliding block in a comparison statement, and behave differently depending on its result. However, even for md5sum, it is still impossible to generate a collision between plain-text files with two separate chosen texts; nor is it possible to generate collisions between files that have no random-seeming sections, or whose random sections are too small, not block-aligned, or drawn from a constrained alphabet. (Snowyowl’s joke would require a preimage attack, which is harder still, and which won’t be available at first even if sha1sum is broken, so he will not be able to fulfill his promise to reveal a message with that sha1sum.)
Anyways, since you asked, here are a few more hashes of the same thing. I didn’t bother with the SHA3 finalists, since they don’t seem to have made convenient command-line utilities yet and I don’t want to force people to fiddle too much to verify my hashes.
sha512sum: 85cf46426d025843d6b0f11e3232380c6fac6cae88b66310ee8fbcd3f81722d08b2154c6388ecb1ee9cebc528e0f56e3be7a057cd67531cfda442febe0132418
sha384sum: 400d47bf97b6a3ccd662e0eb1268820c57d10e2a623c3a007b297cc697ed560862dda19b74638f92a3550fbbfe14d485
md5sum: 8fec2109c85f622580e1a78c9cabdab4
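Once the hidden message is eventually revealed, checking it against the posted digests takes a few lines of Python’s hashlib. A minimal sketch; the candidate bytes below are a placeholder for whatever text actually gets revealed:

    import hashlib

    # Placeholder: substitute the revealed message, byte for byte
    # (trailing newline included or not, exactly as published).
    candidate = b"the revealed message goes here\n"

    print("sha512:", hashlib.sha512(candidate).hexdigest())
    print("sha384:", hashlib.sha384(candidate).hexdigest())
    print("md5:   ", hashlib.md5(candidate).hexdigest())

    # Compare the printed digests against the posted sha512sum,
    # sha384sum, and md5sum values; all three must match.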
This issue came up on Less Wrong before, and I will reiterate the advice I gave there: if a forbidden criterion affects a hiring decision, keep your reasons secret and shred your work. The linked article is about a case where the University of Kentucky was forced to pay $125,000 to an applicant, Martin Gaskell. This happened because the chairman of the search committee, Michael Cavagnero, was stupid enough to write this in a logged email:
If Martin were not so superbly qualified, so breathtakingly above the other applicants in background and experience, then our decision would be much simpler. We could easily choose another applicant, and we could content ourselves with the idea that Martin’s religious beliefs played little role in our decision. However, this is not the case. As it is, no objective observer could possibly believe that we excluded Martin on any basis other than religious...
And that’s where the trouble starts, because Martin Gaskell’s religious beliefs would have been a serious risk to the university’s reputation. No one would take a creationist seriously as an astronomer, and no one would take an observatory seriously if one of the first few Google results for its name connected it to creationism.
Which is why, as soon as they realized they had a creationist as a potentially leading candidate, they should have moved their hiring process into private meetings with poor note-taking, and started looking for better pretexts. Yes, anti-discrimination laws are crazy, but not all judges are. However, a judge can only work around the craziness if you allow a suitable pretext, which means not discussing in writing how you need to break the crazy laws.
I recommend one additional layer of outgoing indirection prior to the Tor network as part of standard precautionary measures.
Let’s not get too crazy; I’ve got other things to do, and there are more practical attacks to worry about first, like cross-checking post times against alibis. I need to finish my delayed-release comment script before I worry about silly things like setting up extra relays. Also, there are lesson plans I need to write, and some Javascript I want Clippy to have a look at.
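The delayed-release script itself is never described, but the idea admits a very small sketch. Everything here, including the post_comment stub, is a hypothetical illustration rather than the actual script:

    import random
    import time

    def post_comment(text):
        # Hypothetical stand-in for whatever actually submits the comment.
        print("posted:", text)

    # Queue comments with randomized release times, decoupling when a
    # comment is written from when it appears, so posting timestamps
    # can't be cross-checked against an alibi.
    now = time.time()
    schedule = sorted(
        (now + random.uniform(3600, 86400), text)
        for text in ["First comment.", "Second comment."]
    )

    for release_at, text in schedule:
        time.sleep(max(0.0, release_at - time.time()))
        post_comment(text)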
Good idea. I’d vote at least once for this.
Hey, be patient. I haven’t been here very long, and building up power takes time.
In short, there most certainly ARE legal restrictions on building your office somewhere deliberately selected for its inaccessibility to those with a congenital inability to e.g. teleport,
The Americans with Disabilities Act limits what you can build (every building needs ramps and elevators), not where you can build it. Zoning laws are blacklist-based, not whitelist-based, so extradimensional spaces are fine. More commonly, you can easily find office space in locations that poor people can’t afford to live near. And in the unlikely event that race or national origin is the key factor, you get to choose which country or city’s demographics you want.
A lack of teleportation-specific case law would not work in your favor, given the judge’s access to statements you’ve already made.
This is the identity under which I speak freely and teach defense against the dark arts. This is not the identity under which I buy office buildings and hire minions. If it were, I wouldn’t be talking about hiring strategies.
You needn’t worry on my behalf. I post only through Tor from an egress-filtered virtual machine on a TrueCrypt volume. What kind of defense professor would I be if I skipped the standard precautions?
By the way, while I may sometimes make jokes, I don’t consider this a joke account; I intend to conduct serious business under this identity, and I don’t intend to endanger that by linking it to any other identities I may have.
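One cheap self-test for an egress-filtered VM of the sort described above, as a stdlib-only sketch; the host and port are arbitrary placeholders, and the assumption is that the filter permits only the Tor process to reach the network:

    import socket

    # Inside a properly egress-filtered VM, a direct (non-Tor)
    # connection attempt should fail.
    try:
        s = socket.create_connection(("example.com", 80), timeout=5)
        s.close()
        print("WARNING: direct connection succeeded; the egress filter is leaking.")
    except OSError:
        print("Direct connection blocked, as intended.")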
Memory charms do have their uses. Unfortunately, they seem to work only in universes where minds are ontologically basic mental entities, and the potions available in this universe are not fast, reliable, or selective enough to be adequate substitutes.