Get enough luxury apartments in an area and all of a sudden the neighborhood becomes a lot more attractive to people with money. This puts pressure on landlords to renovate their buildings to cater to the new customers (and charge a lot more). Even with rent controls in place, there are loopholes to be exploited.
Epictetus
This sounds like an explanation for the old adage: “Go with your gut”. If your brain is a lot better at recognizing patterns than it is at drawing conclusions through a chain of reasoning, it seems advisable to trust that which your brain excels at. Something similar is brought up in The Gift of Fear, where the author cites examples where the pattern-recognition signaled danger, but people ignored them because they could not come up with a chain of reasoning to support that conclusion.
Sufficiently high quality mathematicians don’t make their discoveries through reasoning. The mathematical proof is the very last step: you do it to check that your eyes weren’t deceiving you, but you know ahead of time that your eyes probably weren’t deceiving you. Given that this is true even in math, which is thought of as the most logically rigorous subject, it shouldn’t be surprising that the same is true of epistemic rationality across the board.
Interesting that you bring up this and Poincare’s experience. Jacques Hadamard wrote a book examining this phenomenon based on information he gathered from other mathematicians as well as his (layman’s) knowledge of the psychology of the day. His conclusions bore several similarities to what you’re trying to explain in this post. He did, however, note that experiences like Poincare’s generally only took place if the researcher in question spent a lot of time working on the problem in the old “chain of reasoning” way, with the pattern often becoming clear some weeks or months later, after the researcher had moved on to a different problem. Perhaps this is what constitutes training one’s brain to see patterns.
If you plan on investing now and letting the money sit there for the next few decades, stocks are the way to go. The occasional slump won’t hurt much in the long run.
If you plan on withdrawing money every year for living expenses, then things get tricky. Taking out a fixed amount each year will amplify the effects of stock market slumps. You might get low enough that you’re withdrawing all your returns for the year. That leaves you running in place, falling behind inflation, and one recession away from getting completely wiped out.
The risk here is the risk of ruin. Once your investment hits $0, you’re out of the game.
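The effect described above can be sketched in a few lines. This is a toy simulation with invented returns and amounts, not market data: the same set of yearly returns, in a different order, leaves a fixed-withdrawal portfolio far worse off when the slump comes first.

```python
# Toy sketch: fixed yearly withdrawals amplify the effect of early slumps.
# All returns and dollar amounts are illustrative assumptions.

def simulate(start, withdrawal, returns):
    """Withdraw a fixed amount each year, then apply that year's return.
    Returns year-end balances; stops early if the portfolio is ruined."""
    balance = start
    history = []
    for r in returns:
        balance -= withdrawal
        if balance <= 0:
            history.append(0.0)
            break
        balance *= 1 + r
        history.append(balance)
    return history

# Same returns, different order: a slump up front is far more damaging.
good_first = simulate(1_000_000, 80_000, [0.20, 0.07, 0.07, -0.30, 0.07])
bad_first  = simulate(1_000_000, 80_000, [-0.30, 0.07, 0.07, 0.20, 0.07])
```

With no withdrawals the ordering of returns wouldn’t matter at all; it’s the fixed withdrawal that makes an early slump disproportionately costly, which is the “sequence of returns” risk behind ruin.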
I don’t know that there is a rebuttal. Wireheading goes all the way back to Homer:
They started at once, and went about among the Lotus-eaters, who did them no hurt, but gave them to eat of the lotus, which was so delicious that those who ate of it left off caring about home, and did not even want to go back and say what had happened to them, but were for staying and munching lotus with the Lotus-eaters without thinking further of their return; nevertheless, though they wept bitterly I forced them back to the ships and made them fast under the benches. Then I told the rest to go on board at once, lest any of them should taste of the lotus and leave off wanting to get home.
--The Odyssey
The solution there seems to be not to do it in the first place. It has long been a theme of dystopian fiction that our technology will erode or destroy what it means to be human. Playing around with the brain is definitely going to change things, most likely in ways we can’t quite predict—not to mention any accidental damage caused by novel methods. The only rebuttal I can think of is that our current technology is too crude and barbarous to make such modifications worth the drawbacks.
I think there’s still some solace though. There’s always a reaction to technology that tries to become too invasive. Many are willing to use drugs to regulate their moods, but there’s also strong counter-pressure. New technology doesn’t spread overnight. We’ll have plenty of examples of brain-modified people before the methods become widespread. I’d like to think people will be able to decide whether it’s worth the cost before they jump headlong into it.
Most of my friends and acquaintances are committed to long-term relationships (mid-late 20s age group). I’ve had trouble in this area due to certain personal reasons, but my personal observations lead me to believe that I’m atypical in this regard.
It still seems that single people aren’t seeking or finding relationships at a rate that corresponds to their genuine preference for one. Why aren’t single people trying harder to find relationships?
It’s possible they just don’t know what they’re doing or are paralyzed by anxiety when it comes to romance.
I have a hard time learning from major errors. Typically, I’ll misidentify the cause of the error and end up taking something I did right and changing it for the worse. The fix then requires not just correcting the bad behavior, but also undoing the damage from my previous attempts to rectify things.
A president or prime minister will be the public face of the nation. He’d be expected to meet with foreign dignitaries and speak in public. At the very least, a debate gives people an idea of how their leaders carry themselves when under stress in full public view.
However, if I had to choose between live, in-person debates and a written format where candidates had plenty of time to formulate their thoughts and gather supporting evidence, I’d take the written format every time.
Making prepared statements is usually done by a politician’s staff. The candidate might make some suggestions and approve/reject a draft, but otherwise such a debate would be staffers vs. staffers.
I’ll also note that once upon a time, people attended public debates in part for entertainment.
On a related note, I’ve idly mused on multiple occasions that live in-person political debates seem overweighted in importance.
Overall I do agree. I seldom watch debates, because what the candidates do say is often just a condensed version of the party position that shows up on any one of a dozen websites.
Pedantry and mastery are opposite attitudes toward rules.
To apply a rule to the letter, rigidly, unquestioningly, in cases where it fits and in cases where it does not fit, is pedantry. Some pedants are poor fools; they never did understand the rule which they apply so conscientiously and so indiscriminately. Some pedants are quite successful; they understood their rule, at least in the beginning (before they became pedants), and chose a good one that fits in many cases and fails only occasionally.
To apply a rule with natural ease, with judgment, noticing the cases where it fits, and without ever letting the words of the rule obscure the purpose of the action or the opportunities of the situation, is mastery...
...if you are inclined to be a pedant and must rely upon some rule learn this one: Always use your own brains first.
--George Polya, How to Solve It
I don’t think I’d ever reach that conclusion based on someone’s self-reporting. Too prone to bias.
So what would convince me? Well, the same way MLK convinced me: actions. Not in the sense of having to lead a civil rights movement, but rather in the sense of displaying that level of love and compassion when there is a cost to doing so. Are you so committed that you’d risk imprisonment or assassination? There’s really no way for me to tell unless real life tests your mettle. I admit, it’s a high bar. Extraordinary claims require extraordinary evidence.
How then would I gauge the love and compassion of someone who had to hazard neither life nor liberty? I have on occasion witnessed people perform generous acts which would not have even occurred to me, but upon seeing them I could not doubt their rectitude. More common is just a general pattern of behavior: how an individual interacts with others.
Sure.
Here are my observations: It’s a common tactic among politicians to favorably compare themselves to famous historical figures. It’s common among cranks to compare their own struggles to the persecution of Galileo. In general, there’s a rhetorical device of people comparing themselves to famous figures in order to imply that they have other characteristics in common. This has led me to assign a very low prior probability to such a comparison being wholly innocuous.
As a result, when I see such a statement made, my reaction is to become a lot more cynical about the piece and to question the author’s motives.
That specific line was my perception, yes.
The bit that followed it was intended as a general statement.
Yes, so you’re doing what everyone else did throughout my life: you’re attributing unflattering motivations to me that I don’t have. It’s not just you, it’s almost everyone who I’ve interacted with.
My point is that comparing yourself favorably to someone like Gandhi is a very common rhetorical tactic. For example, here’s Dan Quayle comparing himself to JFK. While the statement he made was literally true, it was perceived as implying other similarities and his opponent called him on it.
From the other comments it seems that you did not intend to imply other similarities to MLK or Gandhi. I wish to convey that even if you personally don’t have this motive, in common use such comparisons do have this motive. Therefore, for someone hearing such a comparison made, there’s a very high prior that such a motive exists.
My current hypothesis is that you’re not doing this and don’t have some sort of evil Hansonian agenda; rather, you don’t know that it’s possible for humans to rewire their motivations so as to be almost completely unrelated to relative status.
The philosopher who provided my username counseled indifference to status. I am familiar with the notion.
My objection is not about motivation, but about motivation as perceived by an outside observer. I take it as a general principle that the message one intends to send, the message one actually sends, and the message received need not be the same. Consider Polya’s traditional math professor: “He says a, he writes b, he means c; but it really should be d.” What are the poor students to make of this muddle?
Back to our hypothetical observer. If the observer does not know your mind, all he has to go on are the literal meaning of the words you use and any connotations associated with them via common usage or community standards. It is possible for these connotations to warp the meaning to something altogether different from what you intended, even if the observer is wholly neutral. Real people have their own filters and perceptions, which can further change the meaning. I have a vague hypothesis that much social convention is just a way of standardizing communication to avoid these kinds of problems.
I think on Less Wrong of all places people should be able to say things like that if they think they are true.
What’s the prior on understanding the mind of MLK or Gandhi well enough to make a realistic comparison? And why choose people who are practically venerated as modern saints? I don’t think that such a comparison is ever truly innocuous. It’s a common Dark Arts ploy to associate oneself with beloved historical figures in the hope of basking in the light of their greatness.
Not being able to point to works as remarkable as those of the most celebrated historical figures in our current cultural awareness is very scant evidence that someone does not experience universal love and compassion of the same sort.
The objection isn’t whether someone actually experienced compassion similar to that of Gandhi. The objection is that comparing oneself to Gandhi raises the specter of the Association Fallacy.
The prior probability could be anything you want. Laplace advised taking a uniform distribution in the absence of any other data. In other words, unless you have some reason to suppose one outcome is more likely than another, you should weigh them equally. For the sunrise problem, you could invoke the laws of physics and our observations of the solar system to assign a prior probability much greater than 0.5.
Example: If I handed you a biased coin and asked for the prior probability that it comes up heads, it would be reasonable to suppose that there’s no reason to suppose it biased to one side over the other and so assign a prior of 0.5. If I asked about the prior probability of three heads in a row, then there’s nothing stopping you from saying “Either it happens or it doesn’t, so 50/50”. However, if your prior is that the coin is fair, then you can compute the prior for three heads in a row as 1⁄8.
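The arithmetic above is small enough to spell out. A minimal sketch using exact fractions, with Laplace’s rule of succession included since the preceding comment mentions his approach to the sunrise problem:

```python
# The fair-coin assumption is the prior; the probability of a specific
# three-flip sequence follows from it by independence.

from fractions import Fraction

p_heads = Fraction(1, 2)       # uniform prior over the two faces
p_three_heads = p_heads ** 3   # (1/2)^3 = 1/8

# Laplace's rule of succession: after s successes in n trials under a
# uniform prior on the coin's bias, the probability that the next trial
# succeeds is (s + 1) / (n + 2).
def rule_of_succession(successes, trials):
    return Fraction(successes + 1, trials + 2)
```

The point of the example survives in code form: “either it happens or it doesn’t, so 50/50” and “the coin is fair, so 1/8” are different priors, and only one of them is consistent with treating each flip as a fair, independent event.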
At least nowadays many places bother to train TAs. My understanding is that not too long ago, the TA was just handed a syllabus and told to teach a class. Some schools had a reputation for admitting excess graduate students just to serve as TAs for a bit before being shown the door.
However, there are some universities that focus on quality undergraduate education. In those places, teaching ability is a big part of the hiring process and people have been denied tenure over poor teaching. It’s the big research universities that have historically been lax in their teaching standards.
There are probably cases where it’s rational for a real-estate agent to sell a property before an auction. An auction could well return less money than the other party offered.
Auctions aren’t free. There are fees to list items at auction and the organization running the auction is likely to collect a certain percentage of the final price as well.
Loosemore is assuming that the AI will be homogeneous, and then wondering how contradictory beliefs can coexist in such a system, and what extra component firewalls off the contradiction.
How do you check for contradictions? It’s easy enough when you have two statements that are negations of one another. It’s a lot harder when you have a lot of statements that seem plausible, but there’s an edge case somewhere that messes things up. If contradictions can’t be efficiently found, then you have to deal with the fact that they might be there and hope that if they are, then they’re bad enough to be quickly discovered. You can have some tests to try to find the obvious ones, of course.
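The asymmetry described above can be made concrete. This is a toy illustration, not a claim about any real AI architecture: spotting a statement asserted alongside its direct negation is a linear scan, but checking a set of rules for *any* hidden contradiction amounts to propositional satisfiability, which in the brute-force form below takes 2^n assignments.

```python
# Toy contrast: direct negations are cheap to find; general consistency
# checking is brute-force SAT over all truth assignments.

from itertools import product

def has_direct_negation(statements):
    """O(n) check: is some statement asserted alongside its negation?"""
    asserted = set(statements)
    return any(("not " + s) in asserted for s in asserted)

def is_consistent(clauses, variables):
    """Brute-force SAT: a clause is a list of (variable, negated) literals.
    Try every truth assignment -- 2^len(variables) of them."""
    for assignment in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, assignment))
        if all(any(env[v] != neg for v, neg in clause) for clause in clauses):
            return True
    return False

# (x or y), (not x or y), (x or not y), (not x or not y): jointly unsatisfiable,
# yet no single pair of clauses is a direct negation.
clauses = [[("x", False), ("y", False)],
           [("x", True),  ("y", False)],
           [("x", False), ("y", True)],
           [("x", True),  ("y", True)]]
```

The four clauses in the example are individually plausible and pairwise compatible; the contradiction only appears when all of them are combined, which is exactly the “edge case somewhere” situation the paragraph describes.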
Pretty much every situation in real life involves some variant on the prisoner’s dilemma, almost always with etiquette, ethical, or legal prohibitions against defection.
Chicken comes up fairly often and there mutual defection is by far the worst outcome for either party (i.e. if you knew the other guy wanted to defect, you’d cooperate).
Given one set of assumptions, one system architecture, it is entirely natural that an AI would pursue its goals against its own information, and against the protests of humans. But on other assumptions, it is utterly bizarre that an AI would ever do that....
If one of its parameters is “do not go against human protests of magnitude greater than X”, then it will not pursue a course of action if enough people protest it. But in this case, avoiding strong human protest is part of its goals.
The AI is ultimately following some procedure, and any outside information or programmer intention or human protest is just some variable that may or may not be taken into consideration.
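A hedged sketch of that point, with all names, utilities, and the threshold invented for illustration: “human protest” enters the decision procedure as just another variable, so avoiding strong protest is part of the goal rather than something outside it.

```python
# Illustrative only: protest level as one more term in the objective.

def choose_action(actions, utility, protest_level, threshold=1000):
    """Pick the highest-utility action whose expected protest stays at or
    below the threshold ('magnitude X'); returns None if none qualifies."""
    allowed = [a for a in actions if protest_level(a) <= threshold]
    return max(allowed, key=utility, default=None)

utilities = {"expand": 9, "maintain": 5, "relocate": 7}
protests  = {"expand": 5000, "maintain": 10, "relocate": 100}
best = choose_action(utilities, utilities.get, protests.get)
```

Here the highest-utility option is ruled out by the protest term, and the AI picks the best remaining action, not because it “listened” to humans in any deeper sense, but because protest was a variable it was built to take into consideration.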
--Francis Bacon, On Revenge