Eric, it’s more amusing that both often cite a theorem that agreeing to disagree is impossible.
It’s only impossible for rational Bayesians, which neither Hanson nor Yudkowsky are. Or any other human beings, for that matter.
Has anyone proved a theorem on the uselessness of randomness?
Clearly you don’t recognize the significance of Eliezer’s work. He cannot be bound by such trivialities as ‘proof’ or ‘demonstration’. They’re not part of the New Rationality.
Don’t you get the same effect from adding an orderly grid of dots?
In that particular example, yes. Because the image is static, as is the static.
If the static could change over time, you could get a better sense of where the image lies. It’s cheaper and easier, and thus ‘better’, to let natural randomness produce this static, especially since significant resources would have to be expended to eliminate the random noise.
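A minimal sketch of that idea in Python, with made-up numbers (the pixel values, threshold, and noise amplitude are illustrative assumptions, not taken from the original discussion): a faint ‘image’ sits entirely below a hard detection threshold, fresh static is added on every ‘frame’, and averaging the detector’s output over many frames reveals which pixels are brighter.

```python
import random

# Illustrative numbers only: a faint image whose pixels all sit below the threshold.
THRESHOLD = 1.0
IMAGE = [0.95, 0.85, 0.0, 0.90, 0.0]  # hypothetical pixel intensities
FRAMES = 5000

def average_detection(noise_amplitude):
    """Average a hard-threshold detector's output over many frames of changing static."""
    counts = [0] * len(IMAGE)
    for _ in range(FRAMES):
        for i, pixel in enumerate(IMAGE):
            noise = random.uniform(-noise_amplitude, noise_amplitude)
            if pixel + noise >= THRESHOLD:
                counts[i] += 1
    return [round(c / FRAMES, 2) for c in counts]

print("no static:      ", average_detection(0.0))  # all zeros -- the image is invisible
print("changing static:", average_detection(0.3))  # brighter pixels fire more often
```

With no static, the detector’s output carries no information about the image at all; with changing static, the time-averaged firing rate tracks pixel brightness, which is the sense in which the noise lets you locate the image.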
What about from aligning the dots along the lines of the image?
If we knew where the image was, we wouldn’t need the dots.
To be precise, in every case where the environment only cares about your actions and not what algorithm you use to produce them, any algorithm that can be improved by randomization can always be improved further by derandomization.
It’s clear this is what you’re saying.
It is not clear this can be shown to be true. ‘Improvement’ depends on what is valued, and on what the context permits. In the real world, the value of an algorithm depends not only on its abstract mathematical properties but also on the costs of implementing it in an environment for which we have only imperfect knowledge.
Caledonian: Yes, I did. So: can’t you always do better in principle by increasing sensitivity?
That’s a little bit like saying that you could in principle go faster than light if you ignore relativistic effects, or that you could in principle produce a demonstration within a logical system that it is consistent if you ignore Gödel’s second incompleteness theorem.
There are lots of things we can do in principle if we ignore the fact that reality limits the principles that are valid.
As the saying goes: the difference between ‘in principle’ and ‘in practice’ is that in principle there is no difference between them, and in practice, there is.
If you remove the limitations on the amount and kind of knowledge you can acquire, randomness is inferior to the unrandom. But you can’t remove those limitations.
Caledonian: couldn’t you always do better in such a case, in principle (ignoring resource limits), by increasing resolution?
I double-checked the concept of ‘optical resolution’ on Wikipedia.
Resolution is (roughly speaking) the ability to distinguish two dots that are close together as different: the closer the dots can be and still be distinguished, the higher the resolution, and the greater the detail that can be perceived.
I think perhaps you mean ‘sensitivity’. It’s the ability to detect weak signals close to the perceptual threshold that noise improves, not the detail.
But it is an inherently odd proposition that you can get a better picture of the environment by adding noise to your sensory information—by deliberately throwing away your sensory acuity. This can only degrade the mutual information between yourself and the environment. It can only diminish what in principle can be extracted from the data.
It is certainly counterintuitive to think that, by adding noise, you can get more out of data. But it is nevertheless true.
Every detection system has a perceptual threshold, a level of stimulation needed for it to register a signal. If the system is mostly noise-free, this threshold is a ‘sharp’ transition. If the system has a lot of noise, the threshold is ‘fuzzy’. The noise present at one moment might destructively interact with the signal, reducing its strength, or constructively interact, making it stronger. The result is that the threshold becomes an average; it is no longer possible to know whether the system will respond merely by considering the strength of the signal.
When dealing with a signal that is just below the threshold, a noiseless system won’t be able to perceive it at all. But a noisy system will pick out some of it—some of the time, the noise and the weak signal will add together in such a way that the result is strong enough for the system to react to it positively.
You can see this effect demonstrated at science museums. If an image is printed very, very faintly on white paper, just at the human threshold for visual detection, you can stare right at the paper and not see what’s there. But if the same image is printed onto paper on which a random pattern of grey dots has also been printed, we can suddenly perceive some of it—and extrapolate the whole from the random parts we can see. We are very good at extracting data from noisy systems, but only if we can perceive the data in the first place. The noise makes it possible to detect the data carried by weak signals.
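A minimal numerical sketch of the mechanism described above, in Python with made-up numbers (the threshold, noise amplitude, and signal strengths are illustrative assumptions): without noise, the detection rate is a step function of signal strength; with noise, it becomes a graded average, so a just-sub-threshold signal gets through some of the time.

```python
import random

# Illustrative numbers only.
THRESHOLD = 1.0
NOISE_AMPLITUDE = 0.3
TRIALS = 20000

def detection_rate(signal, noise_amplitude):
    """Fraction of trials on which signal plus noise crosses the threshold."""
    hits = sum(
        1
        for _ in range(TRIALS)
        if signal + random.uniform(-noise_amplitude, noise_amplitude) >= THRESHOLD
    )
    return hits / TRIALS

for signal in (0.6, 0.8, 0.9, 1.0, 1.1):
    print(
        f"signal {signal:.1f}: "
        f"noiseless {detection_rate(signal, 0.0):.2f}, "
        f"noisy {detection_rate(signal, NOISE_AMPLITUDE):.2f}"
    )
```

The noiseless column jumps from 0 to 1 exactly at the threshold; the noisy column rises gradually with signal strength, which is what lets a weak signal just below the threshold register part of the time.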
When trying to make out faint signals, static can be beneficial. Which is why biological organisms introduce noise into their detection physiologies—a fact which surprised biologists when they first learned of it.
Foraging animals make the same ‘mistake’: given two territories in which to forage, one of which has a much more plentiful resource and is far more likely to reward an investment of effort and time with a payoff, the obvious strategy is to forage only in the richer territory; however, animals instead split their time between the two territories in proportion to the relative probability of a successful return.
In other words, if one territory is twice as likely to produce food through foraging as the other, animals spend twice as much time there: 2/3rds of their time in the richer territory, 1/3rd of their time in the poorer. Similar patterns hold when there are more than two foraging territories involved.
Although this results in a short-term reduction in food acquisition, it’s been shown that this strategy minimizes the chances of exploiting the resource to local extinction, and ensures that the sudden loss of one territory for some reason (blight of the resource, natural disaster, predation threats, etc.) doesn’t result in a total inability to find food.
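A back-of-the-envelope sketch of that short-term cost, in Python. The per-visit payoff probabilities below (0.6 and 0.3) are illustrative assumptions, chosen only to preserve the 2:1 ratio described above.

```python
# Illustrative assumption: the richer territory pays off on 60% of foraging visits,
# the poorer one on 30%, preserving the 2:1 ratio described above.
P_RICH, P_POOR = 0.6, 0.3

# 'Maximizing': spend all foraging time in the richer territory.
maximizing = 1.0 * P_RICH

# 'Probability matching': split time 2:1 between the territories.
matching = (2 / 3) * P_RICH + (1 / 3) * P_POOR

print(f"maximizing: {maximizing:.2f} expected payoffs per visit")
print(f"matching:   {matching:.2f} expected payoffs per visit")
print(f"short-term cost of matching: {1 - matching / maximizing:.0%}")
```

Under these assumed numbers, matching gives up roughly a sixth of the expected short-term intake; that forgone intake is the premium paid for not staking everything on a single patch.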
The strategy is highly adaptive in its original context. The problem with humans is that we retain our evolved, adaptive behaviors long after the context changes to make them non- or even mal-adaptive.
I would suggest taking a hard look at the elements of your social support network, and trying to determine which would sever their links with you if they knew you were not a Christian.
I do not agree that you are compelled not to lie to people. Truth is a valuable thing, and shouldn’t be wasted on those unworthy of it.
Consider that Carl Sagan’s protagonist in “Contact”, Ellie Arroway, claimed to be a Christian, despite being an atheist. Look carefully at the arguments she offered regarding that claim, and see if they can be adapted to your life.
I would recommend that you refuse to claim beliefs that you do not hold, or participate in actions that suggest you believe those things. Reciting the Creed if you do not accept it is out. Taking Communion if you reject the beliefs that form the basis of fellowship in your church is out. So on and so forth. Don’t go to confession if you don’t believe you need to confess. Etc. etc.
It is impossible to determine whether something was well-designed without speculating as to its intended function. Bombs are machines, machines whose function is to fly apart; they generally do not last particularly long when they are used. Does that make them poorly-made?
If the purpose of a collection of gears was to fly apart and transmit force that way, sticking together would be a sign of bad design. Saying that the gears must have been well-designed because they stick together is speculating as to their intended function.
I do not see what is gained by labeling blind entropy-increasing processes as ‘intelligence’, nor do I see any way in which we can magically infer quality design without having criteria by which to judge configurations.
There is no way to tell that something is made by ‘intelligence’ merely by looking at it—it takes an extensive collection of knowledge about its environment to determine whether something is likely to have arisen through simple processes.
A pile of garbage seems obviously unnatural to us only because we know a lot about Earth nature. Even so, it’s not a machine. Aliens concluding that it is a machine with an unknown purpose would be mistaken.
I see that the sentence noting how this line of argument comes dangerously close to the Watchmaker Argument for God has been edited out.
Why? If it’s a bad point, it merely makes me look bad. If it’s a good point, what’s gained by removing it?
Z.M., I agree with your analysis up to the point where you suggest that rational agents act to preserve their current value system.
It may be useful to consider why we have value systems in the first place. When we know why we do a thing, we can evaluate how well we do it, but not until then.
I have no idea what the machine is doing. I don’t even have a hypothesis as to what it’s doing. Yet I have recognized the machine as the product of an alien intelligence.
Are beaches the product of an alien intelligence? Some of them are—the ones artificially constructed and maintained by humans. What about the ‘naturally-occurring’ ones, constructed and maintained by entropy? Are they evidence for intelligence? Those grains of sand don’t wear down, and they’re often close to spherical. Would a visiting UFO pause in awe to recognize beaches as machines with unknown purposes?
Z.M., I agree with your analysis up to the point where you suggest that rational agents act to preserve their current value system.
I suggest that it may be useful for you to consider what the purpose of a value system is. When trying to decide between two value systems, a rational agent must evaluate them in some way. Is there an impersonal and objective set of criteria for evaluation?
Suppose I landed on an alien planet and discovered what seemed to be a highly sophisticated machine, all gleaming chrome as the stereotype demands. Can I recognize this machine as being in any sense well-designed, if I have no idea what the machine is intended to accomplish? I have no idea what the machine is doing. I don’t even have a hypothesis as to what it’s doing. Yet I have recognized the machine as the product of an alien intelligence.
Careful, Eliezer. You are very, very close to simply restating the Watchmaker Argument in favor of the existence of a Divine Being.
You have NOT recognized the machine as the product of an alien intelligence. You most certainly have not been able to identify the machine as ‘well-designed’.
You can’t escape the temptation to lie to people just by having them not pay you in money. There are other forms of payment, of remuneration, besides money.
In fact, if you care about anything involving people or capable of being affected by them in some way, there can always arise situations in which you could maximize some of your goals or preferences by deceiving them.
There are only a few goals or preferences that change this—chief among them, the desire to get what you want without deception. If you possess those goals or preferences in a dominant form, there’s no temptation. If you don’t, there’s also no temptation, because you have no objection.
‘Temptation’ only arises when the preference for doing things one way is not stably dominant over not doing things that way.
Personally, I’m doing it mainly because everyone else is (stop laughing, it’s an important heuristic that should only be overridden when you have a definite reason).
Most smart people I know think that “because everyone else does it” IS a definite reason.
Information and education should be free
Why? People don’t value what they get for free. Education was once valued very highly by the common folk in America. That changed once education began to be provided as a right, and children were obliged to go to school instead of its being a sacrifice on the family’s part.
That’s very selfish
You say that like it’s a bad thing. I am neither a Randian nor a libertarian, but comments like yours push me closer to that line every day.
But a vote for a losing candidate is not “thrown away”; it sends a message to mainstream candidates that you vote, but they have to work harder to appeal to your interest group to get your vote.
Such actions send a lot of messages. I have no confidence in the ability of politicians to determine what I would be trying to convey or the effectiveness of my attempting to do so.
Besides, the point is trivial. A vote for a losing candidate isn’t thrown away because the vote almost certainly couldn’t have been used productively in the first place—you lose little by casting it for the candidate you prefer, just as you’d lose little by casting it for any of the ones you didn’t.
Not voting also sends messages to politicians and your fellow citizens. It is not obvious that they are worse than the ones you’d send by voting.
He quickly appears to conclude that he cannot really discuss any issues with EY because they don’t even share the same premises.
So they should establish what premises they DO share, and from that base, determine why they hold the different beliefs that they do.
I find it unlikely that they don’t share any premises at all. Their ability to communicate anything, albeit strictly limited, indicates that there’s common ground of a sort.
If Robin knows that Eliezer believes there is a good likelihood that Eliezer’s position is wrong, why would Robin then conclude that his own position is likely to be wrong? And vice versa?
The fact that Eliezer and Robin disagree indicates one of two things: either one possesses crucial information that the other does not, or at least one of the two has made a fatal error.
The disagreement stems from the fact that each believes the other to have made the fatal error, and that their own position is fundamentally sound.