BAD: While the venue was a good choice for several reasons (especially for group bonding), one of its downsides was that it was somewhat loud. Conversation was still possible for most of us, but a hearing-impaired LessWronger in attendance was unfortunately unable to participate in any group conversations. And while it’s not always possible to accommodate everyone, a quieter venue for future meetings would not only benefit him but also facilitate communication for everyone and increase the maximum conversational group size.
Yasser_Elassal
The bathtub was supposed to illustrate the collective property notion, not the status-quo notion.
Well that clears things up then. I realize you never included the word “further”, but I had to insert it in order to use your bathtub example to interpret the status quo notion in any meaningful way.
Assuming that had been your intent, the implied reductio was very much part of my point. I didn’t think you would want the factory to continue dumping waste, which is why I thought your argument about “status quo” was flawed.
But since you’ve clarified your position, I lift that particular objection.
Having reread your comments with the context of that clarification, I now understand what you meant and I sort of agree, with caveats.
If there is no clear winner among the possible states of affairs under consideration, then it makes sense to default to the state of affairs that requires no action. And I agree that future humans have rights insofar as it isn’t fair to “use up” nature in the present, leaving future generations with polluted wastelands.
However, I don’t think that uncertainty about the preferences of future humans should leave us unable to make changes to the current state of nature.
messing with nature is going to be stealing it from somebody who was entitled to its being left alone.
This may be true, but if we collectively think in the present that some change is a generally good idea overall, we shouldn’t maintain the status quo just because we’re worried that people in the future might disagree and want nature left alone. We should guess at what their preferences will be and take that into account so that we can move forward.
Otherwise, we’d never be able to change anything about nature that we don’t like.
You’re either ignoring “absent human action” or taking it to mean something wildly different from what I had in mind.
I took it to mean “absent further human action”, which I thought was the only coherent way to interpret your post. (If that’s not what you meant, then please forgive the rant.)
If what you really meant was “absent human action at all” (i.e. just nature), then in your original example about koi, the “natural” status quo would not have been no-koi-in-bathtub, but instead no-bathtub-at-all.
So the only way I could make sense of your example was to assume that you were assigning special status to “no further action” such that it was more relevant to the question of what to do with the bathtub than comparing the utilities of “being able to shower” and “having pet koi” in order to optimize for fairness.
I’m not saying that I think your position is that the status quo is always better. That would be a silly straw man. I’m just saying that privileging the status quo is a form of anchoring that can make people resist change even when they’d consider the new state of affairs to be “more fair” than the old state of affairs, had they not been anchored.
In my example about discovering the bathtub home to koi, “no further action” would have left the koi in place. The misleading advertising had already happened. It would take further action to find the koi a new home.
In my example about the slaveowner being confronted by abolitionists, “no further action” would have kept the slave enslaved. The slave had already been bought “fair and square” according to the rules at the time. The status quo was legal slavery. Abolition is what needed further action.
Am I completely missing your point? If so, by what interpretation of “status quo” was your original koi example relevant?
I privilege the status quo
I wholeheartedly disagree with this mentality, and I think it’s one of the major hindrances to the righting of social injustice. When people feel like they’re entitled to “the way things are”, it’s difficult for them to notice when the status quo is unfair in a way that benefits them at the expense of others.
In your example about the koi fish in the bathtub, the no-koi-containing state of affairs doesn’t win out because it’s the status quo, but because the disutility of not being able to shower (where there was a reasonable expectation prior to renting of being able to shower) outweighs the utility of having koi fish. If you had used Craigslist to rent a room abroad with a shared bathroom and you discovered upon arriving that there were koi fish in the only bathtub, I doubt you’d consider “the koi fish have always been there so let’s not intervene” to be particularly fair, especially given your expectations when you arranged for the room. The situation can be assessed without privileging the current state of affairs.
As a particularly extreme historical example of status quo privileging, if you were a white man in 18th-century America and you worked hard, you could have earned enough money to buy a slave. And you might have felt entitled to that slave because you played fairly according to the rules of the status quo. So if someone came along and argued that even though you followed the rules, it’s not actually fair for you to own a slave because the rules themselves were unfair, you might disagree. In fact, you might argue that it would be unfair to you if the rules were changed after you followed them so obediently.
However, a few hundred years later, it’s obvious to us that slavery was unfair, even if slaveowners disagreed. The slaveowners’ disutility should certainly be taken into account when optimizing for fairness, but it shouldn’t get some special “status quo” multiplier in society’s utility function. The status quo deserves no special privileges because it’s simply one of the many possible states of affairs.
Unfortunately, the tendency to privilege the status quo permeates our modern politics.
I expect that a few hundred years from now, it will be obvious to everyone that it’s unfair for an economic system to fail to provide adequate health care in exchange for any full-time contribution to society, even though many people currently feel entitled to the higher after-tax purchasing power that the status quo provides them at the expense of the uninsured working class.
My response was to Christian’s implication that a rationality program isn’t necessarily buggy for outputting irrational behaviors because it must account for human emotions. My point was that human emotions are part of the human rationality program (whether we can edit our source code or not) and that if they cause an otherwise bug-free rationality program to output irrational behaviors, then the emotions themselves are the bugs.
In your response, you asked about emotions that produce behaviors advantageous to the agent’s goals, which is rational behavior, not irrational behavior as was stipulated in Christian’s post.
If those emotions are part of an otherwise bug-free rationality program that outputs rational beliefs and behaviors, then there is no bug. And that’s what it means for an emotion to be correlated with reality, precisely because there are no XML tags mapping certain neural spike patterns (i.e. emotions) to the state of reality.
Emotions aren’t beliefs about the world that can be verified by looking at the territory. Emotions are threads within the running program that maps and traverses the territory, so the only thing it can mean for them to correlate with reality is that they don’t cause the program to malfunction.
What I was trying to point out to Christian is that emotions are part of the system, not outside of it. So if the system produces irrational behavior, then the system as a whole is irrational, even if some of the subroutines are rational in isolation.
The irrationality of the emotions doesn’t somehow cancel out with the irrationality of the outputs to make the whole system rational.
I suppose you’re saying that when a useful heuristic (allowing real-time approximate solutions to computationally hard problems) leads to biases in edge cases, it shouldn’t be considered a bug because the trade-off is necessary for survival in a fast-paced world.
I might disagree, but then we’d just be bickering about which labels to use within the analogy, which hardly seems useful. I suppose that instead of using the word “bug” for such situations, we could say that an imprecise algorithm is necessary because of a “hardware limitation” of the brain.
However, so long as there are more precise algorithms that can run on the same hardware (debiasing techniques), I would still consider the inferior algorithm to be “craziness”.
An emotion that doesn’t correlate with reality is itself a bug. Sure, it may not be easy to fix (or even possible without brain-hacking), but it’s a bug in the human source code nonetheless.
To extend the analogy, it’s like a bug in the operating system. If that low-level bug causes a higher-level program to malfunction, you can still blame “buggy code” even if the higher-level program itself is bug-free.
To use your analogy. Any person who doesn’t provide the expected output is often deemed crazy… It doesn’t mean that there is a bug in the person, perhaps sometimes it’s a bug in reality.
In the context of my analogy, it’s nonsense to say that reality can have bugs.
I suppose you meant that sometimes the majority of people can share the same bug, which causes them to “deem” that someone who lacks the bug (and outputs accordingly) is crazy.
But there’s still an actual territory that each program either does or does not map properly, regardless of society’s current most popular map. So it’s meaningful to define “craziness” in terms of the actual territory, even if it’s occasionally difficult to determine whether one person is crazy or “everyone else” is.
Stupidity is the lack of mental horsepower. A stupid person has a weak or inefficient “cognitive CPU”.
Craziness is when the output of the “program” doesn’t correlate reliably with reality due to bugs in the “source code”. A crazy person has a flawed “cognitive algorithm”.
It seems that in humans, source code can be revised to a certain degree, but processing power is difficult (though not impossible) to upgrade.
So calling someone crazy (for the time being) is certainly different from calling someone stupid.
Your utility estimates at any given time should already take into account all of the data available to you at that time, including your previous estimates.
In other words, if you decide you don’t want to go to a movie you’ve already purchased a ticket for, that decision has already been influenced by the knowledge that you did want to go to the movie at some point, so there’s no reason to slide your estimate again.
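A minimal numerical sketch of that point (the figures below are made up purely for illustration): if tonight’s estimate was formed while already knowing the earlier one, then folding the earlier estimate back in counts the same evidence twice.

```python
# Hypothetical utilities, chosen only to illustrate the no-double-counting point.
utility_at_purchase = 5.0   # how much I expected to enjoy the movie when I bought the ticket
utility_now = -1.0          # tonight's re-evaluation, made *already knowing* I once wanted to go

# Double-counting: averaging the old estimate back in weighs the same evidence twice.
double_counted = (utility_now + utility_at_purchase) / 2   # 2.0 -- misleadingly positive

# No double-counting: the current estimate already reflects the old one, so act on it alone.
should_go = utility_now > 0   # False

print(double_counted, should_go)
```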
Stop equating skills with intelligence.
I live in San Clemente, but I’d be willing to drive anywhere in Orange County for an occasional meetup.
I chose to believe in the existence of God—deliberately and consciously. This decision, however, has absolutely zero effect on the actual existence of God.
If you know your belief isn’t correlated to reality, how can you still believe it?
To be fair, he didn’t say that the actual existence of God has absolutely zero effect on his decision to believe in the existence of God.
His acknowledgement that the map has no effect on the territory is actually a step in the right direction, even though he has many more steps to go.
A banal one is that misinforming takes effort and not informing saves effort.
That’s an important distinction. In both scenarios, the Carpenter suffers the same disutility, but the utility for the Walrus is higher for “secret” than for “lies” if his utility function values saving effort. Perhaps that’s the reason we don’t feel morally obligated to walk the streets all day yelling correct information at people even though many of them are uninformed.
However, this rationalization breaks down in a scenario where it takes more effort to keep a secret than to share it (such as an interrogation), although I assume our intuitions regarding such a scenario would likewise change.
My hypothesis is that she simply meant, “It makes me happy to pretend that people are nicer than they really are.”
I don’t understand your objection to anonymous review on the basis of accountability. Doesn’t “anonymous review” in this context just mean that the reviewers don’t know the authors and affiliations of the papers they’re reviewing? In that case, what is there to be accountable for? The reviewers themselves aren’t any more anonymous in “anonymous review” than in standard review, are they?
For simplicity, Occam’s razor is often cited as “choose the simplest hypothesis” even when it’s more appropriate to employ its original definition as the principle that one should favor the explanation that requires the fewest assumptions.
I agree that less_schlong shouldn’t be citing Occam’s razor as some fundamental law of the universe, but I do think it’s obvious that all things being equal, we should attempt to minimize speculative assumptions.
I plan to attend with a guest.