What are you all most interested in?
Your solution to the “Four People Who Do Everything” organization problem. This will be immediately relevant to my responsibilities within the next couple months.
I’m actually not making an accusation of overconfidence; just pointing out that using qualified language doesn’t protect against it. I would prefer language that gives (or at least suggests) probability estimates or degrees of confidence, rather than phrases like “looks like” or “many suggest”.
ID theorists are more likely than evolutionary biologists to use phrases like “looks like” or “many suggest” to defend their ideas, because those phrases hide the actual likelihood of ID. When I find myself thinking, “it could be that X,” instead of “because of A and B, X is likely,” I suspect myself of being overconfident, and I apply the same heuristic to statements from other people.
An exercise in parody:
The bacterial flagellum looks like a good candidate for an intelligently designed structure.
Many [non-biologist] researchers think Intelligent Design has explanatory value.
Many [non-biologist] researchers suggest Intelligent Design is scientifically useful.
Our brains may have been intelligently designed to...
but we may not have been designed to...
Evolutionary psychology isn’t as catastrophically implausible as ID; hence the bit about parody. The point is that merely using qualified language is no guarantee against overconfidence.
I’m not convinced that “offense” is a variety of “pain” in the first place. They feel to me like two different things.
When I imagine a scenario that hurts me without offending me (e.g. accidentally touching a hot stovetop), I anticipate feelings like pain response and distraction in the short term, fear in the medium term, and aversion in the long term.
When I imagine a scenario that offends me without hurting me (e.g. overhearing a slur against a group of which I’m not a member) I anticipate feelings like anger and urge-to-punish in the short term, wariness and distrust in the medium term, and invoking heavy status penalties or even fully disassociating myself from the offensive party in the long term.
Of course, an action can be both offensive and painful, like the anti-Semitic slurs you mention. But an offensive action need not be painful. My intuition suggests that this distinction is a principled reason (as opposed to a merely practical one) for the general norm in pluralistic societies that offensiveness alone is not sufficient grounds for constraining free speech.
I’m not sure which category the British Fish thought experiment falls into; the description doesn’t completely clarify whether the Britons are feeling pained or offended or both.
They’re a physical effect caused by the operation of a brain
You haven’t excluded a computational explanation of qualia by saying this. You haven’t even argued against it! Computations are physical phenomena that have meaningful consequences.
“Mental phenomena are a physical effect caused by the operation of a brain.”
“The image on my computer monitor is a physical effect caused by the operation of the computer.”
I’m starting to think you’re confused as a result of using language in a way that allows you to claim computations “don’t exist,” while qualia do.
As to your linked comment: ISTM that qualia are what an experience feels like from the inside. Maybe it’s just me, but qualia don’t seem especially difficult to explain or understand. I don’t think qualia would even be regarded as worth talking about, except that confused dualists try to use them against materialism.
I didn’t intend to start a reductionist “race to the bottom,” only to point out that minds and computations clearly do exist. “Reducible” and “non-existent” aren’t synonyms!
Since you prefer the question in your edit, I’ll answer it directly:
if I replaced the two hemispheres of your brain with two apples, clearly you would become quite ill, even though similarity in number has been preserved. If you believe that “embodying the same computation” is somehow a privileged concept in this regard—that if I replaced your brain with something else embodying the same computation that you would feel yourself to be unharmed—what is your justification for believing this?
Computation is “privileged” only in the sense that computationally identical substitutions leave my mind, preferences, qualia, etc. intact; because those things are themselves computations. If you replaced my brain with a computationally equivalent computer weighing two tons, I would certainly notice a difference and consider myself harmed. But the harm wouldn’t have been done to my mind.
I feel like there must be something we’ve missed, because I’m still not sure where exactly we disagree. I’m pretty sure you don’t think that qualia are reified in the brain—that a surgeon could go in with tongs and pull out a little lump of qualia—and I think you might even agree with the analogy that brains:hardware::minds:software. So if there’s still a disagreement to be had, what is it? If qualia and other mental phenomena are not computational, then what are they?
If computation doesn’t exist because it’s “a linguistic abstraction of things that exist within physics”, then CPUs, apples, oranges, qualia, “physical media” and people don’t exist; all of those things are also linguistic abstractions of things that exist within physics. Physics is made of things like quarks and leptons, not apples and qualia. I don’t think this definition of existence is particularly useful in context.
As to your fruit analogy: two apples do in fact produce the same qualia as two oranges, with respect to number! Obviously color, smell, etc. are different, but in both cases I have the experience of seeing two objects. And if I’m trying to do sums by putting apples or oranges together, substituting one for the other will give the same result. In comparing my brain to a hypothetical simulation of my brain running on a microchip, I would claim a number of differences (weight, moisture content, smell...), but I hold that what makes me me would be present in either one.
See you in the morning! :)
“Computation exists within physics” is not equivalent to “‘2’ exists within physics.”
If computation doesn’t exist within physics, then we’re communicating supernaturally.
If qualia aren’t computations embodied in the physical substrate of a mind, then I don’t know what they are.
I’m asserting that qualia, reasoning, and other relevant phenomena that a brain produces are computational, and that by computing them, a Turing machine can reproduce them with perfect accuracy. I apologize if this was not clear.
Adding two and two is a computation. An abacus is one substrate on which addition can be performed; a computer is another.
I know what it means to compute “2+2” on an abacus. I know what it means to compute “2+2” on a computer. I know what it means to simulate “2+2 on an abacus” on a computer. I even know what it means to simulate “2+2 on a computer” on an abacus (although I certainly wouldn’t want to have to actually do so!). I do not know what it means to simulate “2+2” on a computer.
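To make that distinction concrete, here is a rough sketch (hypothetical Python, purely illustrative; the Abacus class and its methods are my own invention) of the difference between computing “2+2” and simulating “2+2 on an abacus”:

```python
# Computing "2+2": the computation itself, independent of substrate.
def compute_sum():
    return 2 + 2

# Simulating "2+2 on an abacus": modeling a particular physical substrate
# (rods and beads, crudely idealized here) as it carries out that computation.
class Abacus:
    def __init__(self, rods=2):
        self.beads = [0] * rods  # beads pushed up on each rod

    def push_beads(self, rod, count):
        self.beads[rod] += count

    def read_total(self):
        return sum(self.beads)

def simulate_abacus_sum():
    abacus = Abacus()
    abacus.push_beads(0, 2)  # slide two beads on the first rod
    abacus.push_beads(1, 2)  # slide two beads on the second rod
    return abacus.read_total()

assert compute_sum() == simulate_abacus_sum() == 4
```

Both give 4; the difference is whether any particular physical substrate is being modeled at all, which is exactly what “simulate ‘2+2’ on a computer” fails to specify.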
the type of qualia that a simulator actually produces (if any) depends crucially on the actual physical form of that simulator.… [to simulate humans] the simulator must physically incorporate a human brain.
It seems like the definition of “physical” used in this article is “existing within physics” (a perfectly reasonable definition). By this definition, phenomena such as qualia, reasoning, and computation are all “physical” and are referred to as such in the article itself.
Brains are physical, and local physics seems Turing-computable. Therefore, every phenomenon that a physical human brain can produce, including human reasoning and qualia, can be produced by any Turing-complete computer.
So to “physically incorporate a human brain” in the sense relative to this article, the simulator does NOT need to include an actual 3-pound blob of neurons exchanging electrochemical signals. It only needs to implement the same computation that a human brain implements.
http://en.wikipedia.org/wiki/Intentional_base_on_balls
Baseball pitchers have the option to ‘walk’ a batter, giving the other team a slight advantage but denying them the chance to gain a large one. Barry Bonds, the batter who holds the Major League Baseball record for home runs (a home run is a coup for the batter’s team), also holds the record for intentional walks. By walking Barry Bonds, the pitcher denies him a shot at a home run. In other words, Paige is advising other pitchers to walk a batter when doing so minimizes expected risk.
Since this denies the batter the opportunity to even try to get a hit, some consider it to be unsportsmanlike, and when overused it makes a baseball game less interesting. A culture of good sportsmanship and interesting games are communal goods in baseball—the former keeps a spirit of goodwill, and the latter increases profitability—so at a stretch, you might say Paige advises defecting in Prisoner’s Dilemma type problems.
Other concepts that happen to also be termed “values”, such as your ancestors’ values, don’t say anything more about comparative goodness of the future-configurations, and if they do, then that is also part of your values.
I’m having difficulty understanding the relevance of this sentence. It sounds like you think I’m treating “my ancestors’ values” as a term in my own set of values, instead of a separate set of values that overlaps with mine in some respects.
My ancestors tried to steer their future away from economic systems that included money loaned at interest. They were unsuccessful, and fortunately so: lending at interest proved economically valuable. If they had known in advance that it would work out in everyone’s best interest, they would have updated their values (future-configuration preferences).
Of course, you could argue that neither of us really cared about lending at interest; what we really cared about was a higher-level goal like a healthy economy. It would be convenient if we could restate our values as a well-organized hierarchy, with a node at the top that was invariant to available information. But even if that could be done, which I doubt, it would still leave a role for available information in deciding something as concrete as a preferred future-configuration.
The problem with this logic is that my values are better than those of my ancestors. Of course I would say that, but it’s not just a matter of subjective judgment; I have better information on which to base my values. For example, my ancestors disapproved of lending money at interest, but if they could see how well loans work in the modern economy, I believe they’d change their minds.
It’s easy to see how concepts like MWI or cognitive computationalism can affect one’s values once accepted. It’s likely, bordering on certain, that transhumans will have more insights of similar significance, so I hope that human values continue to change.
I suspect that both quoted authors are closer to that position than to endorsing or accepting random value drift.
Reading LessWrong is primarily a willpower restorer for me. I use the “hit” of insight I get from reading a high quality post or comment to motivate me to start Working (and it’s much easier to continue Working than to start). I save posts that I expect to be high quality (like Yvain’s latest) for just before I’m about to start Working. Occasionally the insight itself is useful, of course.
Commenting on LessWrong has raised my standards of quality for my own ideas, understanding them clearly, and expressing them concisely.
I don’t know if either of those are Work, but they’re both definitely Win.
New ideas are held to much higher standard than old ones… Behaviorists, Freudians, and Social Psychologists all had created their own theories of “ultimate causation” for human behavior. None of those theories would have stood up to the strenuous demands for experimental validation that Ev. psych endured.
I’m not sure what you mean. Are you saying that standards of evidence for new ideas are higher now than they have been in the past, or that people are generally biased in favor of older ideas over newer ones? Either claim interests me and I’d like a bit more explanation of whichever you intended.
In general, I think scientific hypotheses should invite “strenuous demands for experimental validation”, not endure them.
I agree (see, e.g., The Second Law of Thermodynamics, and Engines of Cognition for why this is the case). Unfortunately, I see this as a key inferential gap between people who are and aren’t trained in rationality.
The problem is that many people—dare I say most—feel no obligation to gather evidence for their intuitive feelings, or to let empirical evidence inform their feelings. They don’t think of intuitive feelings as predictions to be updated by Bayesian evidence; they treat their intuitive feelings as evidence.
It’s a common affair (at least in the United States) to see debaters use unsubstantiated intuitive feelings as the linchpins of their arguments. It’s even common in internet debates to see whole chains of reasoning in which every link is supported by gut feeling alone. This style of argument is not only unpersuasive to anyone who doesn’t already share those intuitions; it also prevents the debater from updating, as long as his intuitions don’t change.
‘Instinct,’ ‘intuition,’ ‘gut feeling,’ etc. are all close synonyms for ‘best guess.’ That’s why they tend to be the weakest links in an argument—they’re just guesses, and guesses are often wrong. Guessing is useful for brainstorming, but if you really believe something, you should have more concrete evidence than a guess. And the more you base a belief on guesses, the more likely that belief is to be wrong.
Substantiate your guesses with empirical evidence. Start with a guess, but end with a test.
Sure, but then the question becomes whether the other programmer got the program right...
My point is that if you don’t understand a situation, you can’t reliably write a good computer simulation of it. So if logical believes that (to use your first link) James Tauber is wrong about the Monty Hall problem, he has no reason to believe Tauber can program a good simulation of it. And even if he can read Python code, and has no problem with Tauber’s implementation, logical might well conclude that there was just some glitch in the code that he didn’t notice—which happens to programmers regrettably often.
I think implementing the game with a friend is the better option here, for ease of implementation and strength of evidence. That’s all :)
If (and I do mean if; I wouldn’t want to spoil the empirical test) logical doesn’t understand the situation well enough to predict the correct outcome, there’s a good chance he won’t be able to program it into a computer correctly, regardless of his programming skill. He’ll program the computer to perform his misinterpretation of the problem, and it will return the result he expects.
On the other hand, if he’s right about the Monty Hall problem and he programs it correctly… it will still return the result he expects.
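For what it’s worth, a minimal sketch of a correct simulation might look like this (Python, assuming the standard rules: Monty knows where the car is and always opens a goat door the player didn’t pick). The whole point, of course, is that logical would need to write and check his own:

```python
import random

def play_round(switch):
    doors = [0, 1, 2]
    car = random.choice(doors)
    choice = random.choice(doors)
    # Monty opens a door that is neither the player's pick nor the car;
    # this is the step a misunderstanding of the problem is most likely
    # to get wrong.
    opened = random.choice([d for d in doors if d != choice and d != car])
    if switch:
        choice = next(d for d in doors if d != choice and d != opened)
    return choice == car

trials = 100_000
stay = sum(play_round(switch=False) for _ in range(trials)) / trials
swap = sum(play_round(switch=True) for _ in range(trials)) / trials
print("stay:  ", stay)   # comes out near 1/3
print("switch:", swap)   # comes out near 2/3
```

If the Monty step is instead written so that he opens a door at random (or the code quietly assumes he never reveals the car without enforcing it), the program will faithfully report the programmer’s misunderstanding back to him.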
This is why you don’t eat silica gel.
I’m always mildly bemused by the use of quotation marks on these packets. I’ve always seen:
“DO NOT EAT”
Why would the quotation actually be printed on the package? Who are they quoting?