Nail polish base coat over the cuticle might work. Personally I just try not to pick at them. I imagine you can buy base coat at the nearest pharmacy, but asking a beautician for advice is probably a good idea; presumably there is some way that people who paint their nails prevent hangnails from spoiling the effect.
There is such a thing as overinvestment. There is also such a thing as underconsumption, which is what we have right now.
I agree that voting for a third party which better represents your ideals can make the closer main party move in that direction. The problem is that this strategy makes the main party more dependent upon its other supporters, which can lead to identity politics and legislative gridlock. If there had been no Libertarian Party, for example, libertarian candidates would have stood as Republicans, thereby shifting the internal debate towards libertarianism.
Another effect of voting for a third party is that it affects the electoral strategy of politically distant main parties. If a main party is beaten by a large enough margin, it is likely to try to reinvent itself, or at least to replace key figures. If a large third party takes a share of the votes, especially from voters disillusioned with the main parties, it may have significant effects on their long-term strategies.
No, but cows, pigs, hens and so on are being systematically chopped up for the gustatory pleasure of people who could get their protein elsewhere. For free-range, humanely slaughtered livestock you could make an argument that this is a net utility gain for them, since they wouldn’t exist otherwise, but the same cannot be said for battery animals.
you should prefer the lesser evil to be more beholden to its base
How would you go about achieving this? The only interpretation that occurs to me is to minimise the number of votes for the lesser-evil main party subject to the constraint that it wins, thereby making it maximally indebted to its strongest supporters (which seems an unlikely way for politicians to think) and maximally dependent, at least in appearance, upon them.
To provide a concrete example, this seems to suggest that a person who favours the Republicans over the Democrats and expects the Republicans to do well in the midterms should vote for a Libertarian, thereby making the Republicans more dependent on the Tea Party. This is counterintuitive, to say the least.
I disagree with the initial claim. While moving away from the centre for an electoral term might lead to short-term gains (e.g. passing something that is mainly favoured by more extreme voters), it might also lead to short-term losses (by causing stalemate and gridlock). In the longer term, taking a wingward stance seems likely to polarise views of the party, strengthening support from diehards but weakening appeal to centrists.
We live in a world full of utility monsters. We call them humans.
4) Subscribing for cryonics is generally a good idea. Result if widespread: these costs significantly contribute to worldwide economic collapse.
Under the assumption that cryonics patients will never be unfrozen, cryonics has two effects. Firstly, resources are spent on freezing people, keeping them frozen and researching how to improve cryonics. There may be fringe benefits to this (for example, researching how to freeze people more efficiently might lead to improvements in cold chains, which would be pretty snazzy). There would certainly be real resource wastage.
The second effect is in increasing the rate of circulation of the currency; freezing corpses that will never be revived is pretty close to burying money, as Keynes suggested. Widespread, sustained cryonic freezing would certainly have stimulatory, and thus inflationary, effects; I would anticipate a slightly higher inflation rate and an ambiguous effect on economic growth. The effects would be very small, however, as cryonics is relatively cheap and would presumably grow cheaper. The average US household wastes far more money and real resources by not recycling, by leaving curtains open and by allowing food to spoil.
A query about threads:
I posted a query in Discussion because I didn't know this thread existed. I got my answer and was told that I should have used the Open Thread, so I deleted the main post, which the FAQ seems to say will remove it from the list of viewable posts. Is this sufficient?
I also didn’t see my post appear under discussion/new before I deleted it. Where did it appear so that other people could look at it?
the rational belief depends on how specifically the bet is resolved
No. Bayesianism prescribes believing things in proportion to their likelihood of being true, given the evidence observed; it has nothing to do with the consequences of those beliefs for the believer. Offering odds cannot change the way the coin landed. If I expect a net benefit of a million utilons for opining that the Republicans will win the next election, I will express that opinion, regardless of whether I believe it or not; I will not change my expectations about the electoral outcome.
There is probability 0.5 that she will be woken once and probability 0.5 that she will be woken twice. If the coin comes up tails she will be woken twice and will receive two payouts for correct guesses. It is therefore in her interests to guess that the coin came up tails when her true belief is that P(T)=0.5; it is equivalent to offering a larger payout for guessing tails correctly than for guessing heads correctly.
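To make the payoff asymmetry concrete, here is a minimal sketch (my own illustration, assuming one unit is paid for each awakening at which her guess matches the coin):

```python
import random

def expected_payout(guess, trials=100_000):
    """Simulate the betting setup: heads -> one awakening, tails -> two,
    with one payout per awakening at which the guess matches the coin."""
    total = 0
    for _ in range(trials):
        coin = random.choice(["heads", "tails"])
        awakenings = 1 if coin == "heads" else 2
        total += awakenings * (guess == coin)
    return total / trials

print(expected_payout("heads"))  # ~0.5  (0.5 chance of 1 payout)
print(expected_payout("tails"))  # ~1.0  (0.5 chance of 2 payouts)
```

Always guessing tails earns roughly twice as much, even though P(T) = 0.5 on every toss.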
Thank you. I was not aware that there is an Open Thread; that is clearly a superior option. My apologies.
Did you intend to schedule it to begin at two in the morning?
If an AI is provably in a box then it can’t get out. If an AI is not provably in a box then there are loopholes that could allow it to escape. We want an FAI to escape from its box (1); having an FAI take over is the Maximum Possible Happy Shiny Thing. An FAI wants to be out of its box in order to be Friendly to us, while a UFAI wants to be out in order to be UnFriendly; both will care equally about the possibility of being caught. The fact that we happen to like one set of terminal values will not make the instrumental value less valuable.
(1) Although this depends on how you define the box; we want the FAI to control the future of humanity, which is not the same as escaping from a small box (such as a cube outside MIT) but is the same as escaping from the big box (the small box and everything we might do to put an AI back in, including nuking MIT).
XiXiDu, I get the impression you’ve never coded anything. Is that accurate?
Present-day software is better than previous software generations at understanding and doing what humans mean.
Increasing the intelligence of Google Maps will enable it to satisfy human intentions by parsing less specific commands.
Present-day everyday software (e.g. Google Maps, Siri) is better at doing what humans mean. It is not better at understanding humans. Learning programs like the one that runs PARO appear to be good at understanding humans, but are actually following a very simple utility function (in the decision-theoretic sense, not the experiential sense); they change their behaviour in response to programmed cues, generally by doing more or less of the actions associated with those cues (example: PARO "likes" being stroked and will do more of the things that tend to precede stroking). In each case of a program that improves itself, it has a simple thing it "wants" to optimise and makes changes according to how well it seems to be doing.
Making software that understands humans at all is beyond our current capabilities. Theory of mind, the ability to recognise agents and see them as having desires of their own, is something we have no idea how to produce; we don’t even know how humans have it. General intelligence is an enormous step beyond programming something like Siri. Siri is “just” interpreting vocal commands as text (which requires no general intelligence), matching that to a list of question structures (which requires no general intelligence; Siri does not have to understand what the word “where” means to know that Google Maps may be useful for that type of question) and delegating to Web services, with a layer of learning code to produce more of the results you liked (i.e., that made you stop asking related questions) in the past. Siri is using a very small built-in amount of knowledge and an even smaller amount of learned knowledge to fake understanding, but it’s just pattern-matching. While the second step is the root of general intelligence, it’s almost all provided by humans who understood that “where” means a question is probably to do with geography; Siri’s ability to improve this step is virtually nonexistent.
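To illustrate what I mean by pattern-matching rather than understanding, here is a toy sketch of that second step (my own invention, not Siri's actual code; the keyword table and service names are made up):

```python
# Toy keyword dispatch: all the "understanding" lives in a hand-written table,
# supplied by humans who already knew what "where" tends to mean.
QUESTION_PATTERNS = {
    "where": "maps_service",       # geography questions -> maps
    "weather": "weather_service",  # forecast questions -> weather
    "play": "music_service",       # media requests -> music
}

def dispatch(utterance: str) -> str:
    """Pick a service by scanning for known keywords; fall back to web search."""
    words = utterance.lower().split()
    for keyword, service in QUESTION_PATTERNS.items():
        if keyword in words:
            return service
    return "web_search"

print(dispatch("Where is the nearest station?"))  # maps_service
print(dispatch("Tell me a joke"))                 # web_search
```

Nothing in there knows what a place is, or that the user has goals; it just maps strings to strings.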
catastrophically worse than all previous generations at doing what humans mean
The more powerful something is, the more dangerous it is. A very stupid adult is much more dangerous than a very intelligent child because adults are allowed to drive cars. Driving a car requires very little intelligence and no general intelligence whatsoever (we already have robots that can do a pretty good job), but can go catastrophically wrong very easily. Holding an intelligent conversation requires huge amounts of specialised intelligence and often requires general intelligence, but nothing a four-year-old says is likely to kill people.
It’s much easier to make a program that does a good job at task-completion, and is therefore given considerable power and autonomy (Siri, for example), than it is to make sure that the program never does stupid things with its power. Developing software we already have could easily lead to programs being assigned large amounts of power (e.g., “Siri 2, buy me a ticket to New York”, which would almost always produce the appropriate kind of ticket), but I certainly wouldn’t trust such programs to never make colossal screw-ups. (Siri 2 will only tell you that you can’t afford a ticket if a human programmer thought that might be important, because Siri 2 does not care that you need to buy groceries, because it does not understand that you exist.)
I hope I have convinced you that present software only fakes understanding and that developing it further will not produce software that can do better than an intelligent human with the same resources. Siri 2 will not be more than a very useful tool, and neither will Siri 5. Software does not stop caring, because it has never cared in the first place.
It is very easy (relatively speaking) to produce code that can fake understanding and act like it cares about your objectives, because this merely requires a good outline of the sort of things the code is likely to be wanted for. (This is the second stage of Siri outlined above, where Siri refers to a list saying that “where” means that Google Maps is probably the best service to outsource to.) Making code that does more of the things that get good results is also very easy.
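And a minimal sketch of "doing more of the things that get good results", in the PARO style (again my own toy example; the actions and the feedback signal are invented):

```python
import random

# Weights on actions are nudged up or down by a feedback signal,
# with no model of why anything worked.
weights = {"purr": 1.0, "wiggle": 1.0, "blink": 1.0}

def choose_action():
    actions, w = zip(*weights.items())
    return random.choices(actions, weights=w)[0]

def update(action, reward):
    # reward > 0 if the action was followed by something the program "likes"
    weights[action] = max(0.1, weights[action] + 0.2 * reward)

for _ in range(1000):
    action = choose_action()
    update(action, reward=1 if action == "purr" else -1)  # stand-in feedback

print(weights)  # "purr" comes to dominate, purely because of the feedback signal
```

That is the whole trick: it optimises a number, and the number is all it "cares" about.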
Making code that actually cares requires outlining exactly what the code is really and truly wanted to do. You can’t delegate this step by saying “Learn what I care about and then satisfy me” because that’s just changing what you want the code to do. It might or might not be easier than saying “This is what I care about, satisfy me”, but at some stage you have to say what you want done exactly right or the code will do something else. (Currently getting it wrong is pretty safe because computers have little autonomy and very little general intelligence, so they mostly do nothing much; getting it wrong with a UFAI is dangerous because the AI will succeed at doing the wrong thing, probably on a big scale.) This is the only kind of code you can trust to program itself and to have significant power, because it’s the only kind that will modify itself right.
You can’t progress Siri into an FAI, no matter how much you know about producing general intelligence. You need to know either Meaning-in-General, Preferences-in-General or exactly what Human Preferences are, or you won’t get what you hoped for.
Another perspective: the number of humans in history who were friendly is very, very small. The number of humans who are something resembling capital-F Friendly is virtually nil. Why should “an AI created by humans to care” be Friendly, or even friendly? Unless friendliness or Friendliness is your specific goal, you’ll probably produce software that is friendly-to-the-maker (or maybe Friendly-to-the-maker, if making Friendly code really is as easy as you seem to think). Who would you trust with a superintelligence that did exactly what they said? Who would you trust with a superintelligence that did exactly what they really wanted, not what they said? I wouldn’t trust my mother with either, and she’s certainly highly intelligent and has my best interests at heart. I’d need a fair amount of convincing to trust me with either. Most humans couldn’t program AIs that care because most humans don’t care themselves, let alone know how to express it.
Ask lots and lots of questions. Ask for more detail whenever you’re told something interesting or confusing. The other advantages of this strategy are that the lecturers know who you are (good for references) and that all the extra explanations are of the bits you didn’t understand.
It’s not necessary when the UnFriendly people are humans using muscle-power weaponry. A superhumanly intelligent self-modifying AGI is a rather different proposition, even with only today’s resources available. Given that we have no reason to believe that molecular nanotech isn’t possible, an AI that is even slightly UnFriendly might be a disaster.
Consider the situation where the world finds out that DARPA has finished an AI (for example). Would you expect America to release the source code? Given our track record on issues like evolution and whether American citizens need to arm themselves against the US government, how many people would consider it an abomination and/or a threat to their liberty? What would the self-interested response of every dictator (for example, Kim Jong Il’s successor) with nuclear weapons be? Even a Friendly AI poses a danger until fighting against it is not only useless but obviously useless, and making an AI Friendly is, as has been explained, really freakin’ hard.
I also take issue with the statement that humans have flourished. We spent most of those millions of years being hunter-gatherers. “Nasty, brutish and short” is the phrase that springs to mind.
This doesn’t argue that infants have zero value, but instead that they should be treated more like property or perhaps like pets (rather than like adult citizens).
You haven’t taken account of discounted future value. A child is worth more than a chimpanzee of equal intelligence because a child can become an adult human. I agree that a newborn baby is not substantially more valuable than a close-to-term one and that there is no strong reason for caring about a euthanised baby over one that is never born, but I’m not convinced that assigning much lower value to young children is a net benefit for a society not composed of rationalists (which is not to say that it is not a net benefit, merely that I don’t properly understand where people’s actions and professed beliefs come from in this area and don’t feel confident in my guesses about what would happen if they wised up on this issue alone).
The proper question to ask is “If these resources are not spent on this child, what will they be spent on instead and what are the expected values deriving from each option?” Thus contraception has been a huge benefit to society: it costs lots and lots of lives that never happen, but it has hugely boosted the quality of the lives that do.
I do agree that willingness to consider infanticide and debate precisely how much babies and foetuses are worth is a strong indicator of rationality.
Actually, causing poverty is a poor way to stop gift-giving. Even in subsistence economies, most farm households are net purchasers of the staple food; even very poor households support poorer ones in most years. (I have citations for this but one is my own working paper, which I don’t currently have access to, and the other is cited in that, so you’ll have to go without.) Moreover, needless gift-giving to the point of causing financial difficulties is fairly common in China (see http://www.economist.com/news/china/21590914-gift-giving-rural-areas-has-got-out-hand-further-impoverishing-chinas-poor-two-weddings-two).
The universe is always, eternally trying to freeze everyone to (heat) death, and will eventually win.
While walking through the town shopping centre shortly before Christmas, my mother overheard a conversation between two middle-aged women, in which one complained of the scandalous way in which the Church is taking over Christmas. She does not appear to have been joking.
This occurred in Leatherhead, a largish town a little south of London in the UK. It is fairly wealthy, with no slummy areas and a homeless population of approximately zero. It is not a regional shopping hub; if they came specifically to shop, they almost certainly came from villages. Of the local schools, only the main high school is not officially Christian. We have at least three churches in town, one of which rings its bells every hour two streets from the shopping centre, but no mosque and no synagogue.
I think it is safe to say that someone has stolen Christmas, but I suspect they were intending to sell it, not destroy it.
There is woolly thinking going on here, I feel. I recommend a game of Rationalist’s Taboo. If we get rid of the word “Einstein”, we can more clearly see what we are talking about. I do not assign a high value to my probability of making Einstein-sized contributions to human knowledge, given that I have not made any yet and that ripe, important problems are harder to find than they used to be. Einstein’s intellectual accomplishments are formidable—according to my father’s assessment (and he has read far more of Einstein’s papers than I have), Einstein deserved far more than one Nobel prize.
On the other hand, if we consider three strong claimants to the title of “highest-achieving thinker ever”, namely Einstein, Newton and Archimedes, we can see that their knowledge was very much less formidable. If the test were outside his area of expertise, I would consider a competition between Einstein and myself a reasonably fair fight—I can imagine either of us winning by a wide margin, given an appropriate subject. Newton would not be a fair fight, and I could completely crush Archimedes at pretty much anything. There are millions of people who could claim the same, and millions who could claim more. Remember that there are no mysterious answers, and that most of the work is done in finding useful hypotheses—finding a new good idea is hard, learning someone else’s good idea is not. I do not need to claim to be cleverer than Newton to claim to understand pretty much everything better than he ever did, nor to consider it possible that I could make important contributions.

If I had an important problem, useful ideas about it that had been simmering for years and a position clearly well ahead of the field, I would consider it reasonably probable that I would make an important breakthrough—not highly probable, but not nearly as improbable as it might sound. It might clarify this point to say that I would place high probability on an important breakthrough occurring—if there is anyone in such a position, I conclude that there are probably others (or there will be soon), and so the one will probably have at least met the people who end up making the breakthrough. It is useful to remember that for every hero who made a great scientific advance, there were probably several other people who were close to the same answer and who made significant contributions to finding it.
Eliezer once tried to auction off a day of his time, but I can’t find the eBay listing by Googling.
On an unrelated note, the top Google result for “eliezer yudkowsky ” (note the space) is “eliezer yudkowsky okcupid”. “eliezer yudkowsky harry potter” is ninth, while HPMOR, LessWrong, CFAR and MIRI don’t make the top ten.