Stupid Questions June 2015
This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don’t be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.
Please be respectful of other people’s admitting ignorance and don’t mock them for it, as they’re doing a noble thing.
To any future monthly posters of SQ threads, please remember to add the “stupid_questions” tag.
What contingencies should I be planning for in day to day life? HPMOR was big on the whole “be prepared” theme, and while I encounter very few dark wizards and ominous prophecies in my life, it still seems like a good lesson to take to heart. I’d bet there’s some low-hanging fruit that I’m missing out on in terms of preparedness. Any suggestions? They don’t have to be big things—people always seem to jump to emergencies when talking about being prepared, which I think is both good and bad. Obviously certain emergencies are common enough that the average person is likely to face one at some point in their life, and being prepared for it can have a very high payoff in that case. But there’s also a failure mode that people fall into of focusing only on preparing for sexy-but-extremely-low-probability events (I recall a reddit thread that discussed how to survive in case an airplane that you’re on breaks up, which...struck me as not the best use of one’s planning time). So I’d be just as interested in mundane, everyday tips.
(Note: my motivation for this is almost exclusively “I want to look like a genius in front of my friends when some contingency I planned for comes to pass”, which is maybe not the best motivation for doing this kind of thing. But when I find myself with a dumb-sounding motive for doing something I rationally endorse anyway, I try to take advantage of the motive, dumb-sounding or not.)
Those related to what you do and where you go in day to day life. The only people who need to worry about a micrometeorite punching a hole in the spaceship get training for it already.
These might include such things as: locking yourself out of your house, having an auto breakdown, being confronted by a mugger, being in an unfamiliar building when the fire alarm goes off, coming upon the scene of a serious accident, where to go and how to get there when widespread flooding is imminent, being stranded in a foreign country without funds or a ticket out, when to see a doctor when a mole you’ve always had starts growing, getting old, and so on.
Do you have insurance for anything? The list of what it covers is a list of contingencies. If it’s worth spending money for the monetary compensation, it’s worth thinking about how to deal with it if it happens, and how to stop it happening.
I am by no means an expert, but here are a couple of options that come to mind. I came up with most of these by thinking "what kinds of emergencies are you reasonably likely to run into at some point, and what can you do to mitigate them?"
Learn some measure of first aid, or at least the Heimlich maneuver and CPR.
Keep a seat belt cutter and window breaker in your glove compartment. And on that subject, there are a bunch of other things that you may want to keep in your car as well.
Have an emergency kit at home, and have a plan for dealing with natural disasters (fire, storms, etc). If you live with anyone, make sure that everyone is on the same page about this.
On the financial side, have an emergency fund. This might not impress your friends, but given how likely financial emergencies (e.g. unexpectedly losing a job) are relative to other emergencies, this is a good thing to plan for nonetheless. I think the standard advice is to have something on the order of 3-6 months of income tucked away for a rainy day.
3-6 months? People don't go on piling up savings indefinitely? How else do you retire? I mean… there is a state pension in the country I live in, but I would not count on it not going bust in 30 years, so I always assumed I will have what I save and then maybe the state pays a bonus.
The 3-6 months is in a liquid savings account. Beyond that, you want your money in investments that will earn interest. They will be more volatile, so aren’t advisable as an emergency fund. They can also be harder to access.
You are of course entirely correct in saying that this is far too little to retire on. However, it is possible to save without being able to liquidate said saving; for example by paying down debts. The Emergency Fund advice is that you should make a point to have enough liquid savings tucked away to tide you over in a financial emergency before you direct your discretionary income anywhere else.
Ah… I see. We keep most of our savings liquid. Safe, i.e. government-guaranteed, investments at the biggest banks here are like 0.5% a year (the Kapitalsparbuch thing here in Austria), so I don't give a damn. And I would rather not gamble on the stock exchange. If I saw inflation I would care, but then I would also see more decent interest rates.
I think it's a minimum of 3-6 months in a place where you can access it on short notice. Of course, the most common advice I see around LW, and in other "If I knew this when I was 20 years younger..." type posts, is: it's never too early to be saving money and building wealth.
Split it into “commutes by car” and “commutes by public transport”. I know when I used to own a car I was ridiculously prepared, even having a shovel in the trunk. Now with the subway, basically nothing—I have a whole city full of services to help me or anyone else in need. Or five hundred people on the subway train with various skills and items.
Very good ideas. Could be improved upon thus:
seat belt cutter and window breaker for your key ring—always present, in the bus, the train, other people's cars.
Practice emergency procedures. To be actually able to perform them under stress.
Always carry a compact emergency kit with band-aid and one or two pads. Possibly a rescue blanket in your backpack.
Always have some cash handy (may depend on your country, municipality).
QuikClot: http://www.amazon.com/QuikClot-Advanced-Clotting-Sponge-1-75/dp/B00HJTH22E/ref=sr_1_1?ie=UTF8&qid=1433129845&sr=8-1&keywords=quickclot (ref = Slate Star)
I don't believe I've ever seen them in regular over-the-counter emergency kits, but making sure you have a tourniquet within reach (and know its use) can't hurt. A pocket mask is great, too. An AED would probably be amazing if you have over a thousand US dollars (or the equivalent) to spend. Emergency treatments in general change pretty drastically every few years, so it would be an ongoing investment.
Have a good, working knowledge of what diabetes and various cardiac issues look like. While it may never happen to you, recognizing it and calling for help might save someone.
The training, naturally, is probably the hardest part to acquire, but I don’t think anyone who maximizes learning efficiency would have any trouble. The main issue is finding the right teachers.
While I could come up with a curriculum (I teach very basic survival/emergency treatment regularly) and put it in a nice app or something, the nature of those treatments is constantly changing, and I wouldn't in good conscience disseminate that information without knowing that students would be able to stay up to date.
Until then, an EMT course can’t hurt. If you have stable employment and decent hours, you might be able to take advantage of night classes.
Knowing where the AEDs are in your workplace is a good idea, too!
How do you expect people who are not doctors or nurses to acquire that?
I suppose the problem with that statement was ‘good’ and ‘working’. It is far easier to simply memorize the symptoms and general knowledge, see what it looks like on assorted Youtube videos and browsing Figure 1, which is free and accessible to the public, than it is to acquire experience with it. This is the cheapest route, and getting that initial knowledge uses the same study techniques you would use to learn, say, microeconomics.
You don't need too much (EMT and CPR certification) to become an Emergency Room Technician, solely to volunteer (as opposed to looking for employment) at an emergency room on weekends. The job mostly involves taking vitals, cleaning, and being ready to assist medical staff with menial labor. It's probably the cheapest way to do it that I can think of. Close observation of what the doctors and nurses are doing would yield enough experience to recognize frequent issues surrounding diabetic and cardiac emergencies. EMT and CPR would incur the most costs, besides time on weekends.
I would like to see some data on whether they are useful, that is, how likely are you to find yourself in a situation where having them in your glove compartment will be important.
How do you determine whether a seat belt cutter/window breaker is a good one? Should you test it on an old rag or something?
I’m afraid I don’t know. You might get better luck making this question a top level post.
Often being prepared simply means that nobody notices anything being at odds. Don’t optimize for flashy solutions.
Fair.
What to do when things get lost
1) Your credit card
2) Your mobile phone
3) Your keys
What do you do when things you rely on break:
1) Your computer
2) Your car
Who to call?
1) The police arrest you and charge you with a criminal act
2) You have a medical emergency (also set up an ICE contact entry on your smart phone)
Identify local forms of natural disaster and what you intend to do in the circumstances. (bush-fires, earthquakes, typhoons, volcanoes, snowstorm, bear-pocalypse… whatever is normal in your area)
Identify what you plan to do in case of a power failure (owning some candles or something), depending on how bad the failure is and how long it lasts. I suggest owning an external battery block for phone charging—it gives extra peace of mind that you won't run out of battery. (Something like 15,000 mAh should be plenty for most people.)
(I have never suffered a technical failure, but) prepare for a hard-drive failure: monthly backups, cloud storage. How will you manage if you suddenly are unable to earn money for 3-6 months? Have savings; have a plan; use programs like Pocketbook or YouNeedABudget; calculate your burn-rate. Expect unexpected spending, i.e. bills. Plumbing problems sometimes just happen in old houses—know what to do (how to change a washer, etc.), and know how to open an S-bend if something is dropped down a pipe.
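The burn-rate calculation mentioned above takes only a few lines; all figures below are made-up placeholders for illustration, not recommendations:

```python
# Hypothetical monthly expenses, for illustration only.
monthly_expenses = {
    "rent": 1200,
    "food": 400,
    "utilities": 150,
    "transport": 100,
    "insurance": 120,
}

burn_rate = sum(monthly_expenses.values())  # dollars per month
savings = 9000                              # liquid emergency fund

runway_months = savings / burn_rate         # how long savings would last
target_fund = 6 * burn_rate                 # upper end of the 3-6 month rule

print(f"Burn rate: ${burn_rate}/month")
print(f"Runway: {runway_months:.1f} months")
print(f"Target fund (6 months): ${target_fund}")
```

The useful habit is tracking the expense dictionary, not the arithmetic: once you know your real burn rate, the 3-6 month rule turns into a concrete savings target.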
(Basic first aid training was mentioned elsewhere, but I wanted to add that we don't train the Heimlich manoeuvre in Australia.)
know how to use a fire extinguisher (you just have to read the instructions on the front; but maybe read them before you are in desperate need of them)
being qualified to drive larger vehicles can help in life.
knowledge of the law in some areas.
knowing how to cook delicious things on short notice (1-2 recipes that you can whip up really quick).
Install a smoke detector
Do martial arts training until you get the falling more or less right. While this might be helpful against muggers, the main benefit is the reduced probability of injury in various unfortunate situations.
As someone with ~3 years of aikido experience, I second this.
I have a small multitool on my keychain and have for several years; it most often comes in handy as a bottle opener but the small pliers, knife, screwdriver, and wire stripper have all been used.
I have the entire road system of North America as of three months ago downloaded as about five gigabytes of data on my phone (yay micro SD cards) which comes in handy when driving through rural mountains.
A charged car battery exists in the trunk of my car for jumpstarting (and also for running my big computerized telescope out in the field, which is how I ensure it is kept charged, because it is dual use).
Mostly for hobby purposes but also for contingencies, I have built a portable solar-powered lithium-iron-phosphate battery pack that charges from sunlight at 25 watts, can store 200 watt-hours, and can discharge upwards of 100 watts at either 12 volts DC or 120 volts AC.
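As a rough sanity check on those numbers (25 W solar input, 200 Wh capacity, up to 100 W discharge), the back-of-envelope arithmetic looks like this; the ~15 Wh per phone charge is my own assumption, not a figure from the comment:

```python
# Figures taken from the comment above; assumes full sun and no conversion losses.
charge_watts = 25       # solar input
capacity_wh = 200       # pack capacity in watt-hours
discharge_watts = 100   # maximum sustained load
wh_per_phone_charge = 15  # assumed typical smartphone battery, in Wh

hours_to_full = capacity_wh / charge_watts          # full sun needed to recharge
hours_at_max_load = capacity_wh / discharge_watts   # runtime at the 100 W maximum
phone_charges = capacity_wh / wh_per_phone_charge   # rough phone charges per fill

print(hours_to_full, hours_at_max_load, round(phone_charges))
```

So the pack needs about a day of good sun to refill, and holds on the order of a dozen phone charges—plenty for a multi-day outage.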
+1 to offline maps before travelling. An offline map nearly killed me and saved my life as well. Would suggest having it.
Take the people you spend time with to first aid, Heimlich, and CPR classes. You will need their help if you are the one choking or unable to breathe.
Build up enough stamina and physical fitness to run at both a sprint and for several minutes straight. Running away from a fight can be a very good strategy for not getting harmed.
Read the “Influence: Science and Practice” chapter that discusses Social Proof. I think it was chapter 4. The suggestions involved help avoid the bystander effect where a person in need is left alone and unassisted by a group of onlookers. The chapter deals with a few examples of effectively communicating and prompting someone to help you in an emergency situation. This is especially necessary in an urban environment.
Is there anything you keep expecting yourself to remember, but you don’t remember it? If so, make an extra effort to remember it, or make a note, or whatever might help.
Thanks for the great suggestions everyone. To follow up, here’s what I did as a result of this thread:
-Put batteries back in my smoke detector
-Backed up all of my data (hadn’t done this for many months)
-Got a small swiss army knife and put it on my keychain (already been useful)
-Looked at a few fire extinguishers to make sure I knew how to use them
-Put some useful things in my messenger bag (kleenex, pencil and paper) - I’ll probably try to keep adding things to my bag as I think of them, since I almost always have it with me
All of the car-related suggestions seemed like good ones, but weren’t applicable since I don’t own a car. Some other suggestions were good but required more time than I was willing to put in right now, or weren’t applicable for other reasons.
Things that are unsexy but I can actually verify as having been useful more than once:
In wallet, folded up tissue. For sudden attack of sniffles (especially on public transport), small cuts, emergency toilet paper.
In the bag I carry every day: small pack of tissues, multitool, tiny torch, ibuprofen, pad and pencil, USB charging cable for phone, plastic spork, wet wipe thing from KFC (why do they always shovel multiples of those things in with my order?).
America-centric, but: if you have a phone, I would suggest programming in the numbers for the local police, a good urgent care clinic in your area (a wiser choice than the ER, when possible), and your garage (especially if you don't have AAA). 911 is an important tool, but it is not always the best tool for the job, and the cost of updating your address book is essentially zero.
Oh, and perhaps the New York Public Library’s virtual reference service (depending on your long-distance plan).
(Dunno how they are sold in your country.) A bottle of nitroglycerine or similar drugs, the instructions to which you know by heart, similar to Harry's preparations? Considering that the probability of your encountering a stranger who has an emergency should be higher than the probability of only you having it, unless there is a common cause. In case there is a common cause, well. Bring a gun?.. (At least it is small.) A notebook also seems a useful thing to have, with a pencil attached.
This needs to be qualified by lots of clauses like local law, necessary practice… But maybe a smaller alternative: pepper spray?
I’m about to graduate college and go into the real world, and I’m trying to get a job right now. If I’m not able to get one in the next few months, I will need some source of income. What are good reliable ways that I can convert time to money before I get a full-time job?
EDIT: I’m a physics/chemistry undergraduate with a decent GPA, and I have some skills in coding if that helps. I’m applying for jobs in software development and data analysis, and I’ve applied to 25 so far and have only heard back from 1. I’m going to keep applying and am fairly confident I’ll get something, but in case everything fails I want to have a backup.
Potentially relevant stuff from brief Google site searches:
“Studying and Part-time work/supplementary income”
“Interesting (or semi-interesting) part time jobs?”
saving money by doing tasks in cheaper, time-consuming ways
bisserlis talking about joining a temp agency
“low stress employment/ munchkin income thread”
“Physics grad student: how to build employability in programming & finance”
I feel like part-time work, and lightweight methods of converting time to money, have been chewed over in even more LW posts, but I can’t quickly dig them out.
Sending 25 resumes is one strategy, but there are others. I believe you should find a few companies you like, learn a lot about them, find someone who has a contact at them, and develop a relationship with that contact.
This website has a lot of elements I agree with in terms of trying to get a job: http://corcodilos.com/blog/7633/how-to-tease-a-job-interview-out-of-a-manager
I just went through the process of applying to software companies too. I get a strong impression that it’s a numbers game and low response rates are to be expected. Feel free to PM me if you have any specific questions.
If there’s a company that you really want to work for, then something like this seems like it’d be really hard to ignore.
Well there are a ton of low-skill jobs out there. And then there are higher skill jobs like tutoring that you may be able to pursue in the short term.
But why? I sense that right now, your time would be better spent on higher level actions like learning and thinking about what you want in life. This obviously depends on a lot of other things though. Like your goal and your financial situation.
I’m having some major psychological health issues lately and am feeling lost and hopeless. Ideally, I’d like to seek advice and/or counseling from someone in the EA/LW community because they would be better able to relate to my goals/motivations and might be able to offer me particularity useful advice. Is there anywhere I could go for this, anyone I can reach out to, or does anyone know of a psychiatrist/psychologist in the DC area who is in the EA/LW community? Thanks so much.
The therapists I know would strongly recommend seeing someone in person, rather than counsel over the internet. So if you can’t find a psychiatrist/psychologist in the DC area who is in the EA/LW community, I’d suggest you relax the EA/LW criterion rather than the DC area criterion. Good luck.
I am very far away and I don’t know if I can help. But send me a message and I can try.
What’s the deal with laundry detergent packaging? For instance, take a look at this http://news.pg.com/sites/pg.newshq.businesswire.com/files/image/image/Tide_Liquid_Detergent.jpg Nowhere on the package does it actually say it’s detergent! I guess they’re just relying on people knowing that Tide is a brand of detergent? Except that Tide also makes other products, such as fabric softener. And it’s not just Tide. http://www.freestufffinder.com/wp-content/uploads/2014/03/all-laundry.jpg http://dgc.imageg.net/graphics/product_images/pDGC1-10603813v380.jpg http://dgc.imageg.net/graphics/product_images/pDGC1-10603814v380.jpg http://dgc.imageg.net/graphics/product_images/pDGC1-12184807v380.jpg
Doing a google search, the only image that I came across of a bottle that actually says “detergent” is this: http://dovsbythecase.com/wp-content/uploads/2014/01/allhefreeclear1.jpg If you zoom in, way at the bottom, in tiny print, it says “detergent”. Maybe the other ones also say it, but they weren’t zoomable.
I had this problem with soap for a while (there was a “Dove isn’t soap!” campaign that didn’t say what it… like… was… and I switched to Ivory because I wanted soap.)
There’s a label on the back as well with details. The front label is a billboard, designed to get your attention and take advantage of brand loyalty, so yes—you are expected to know it’s detergent, and they are happy to handle the crazy rare edge-case person who does not recognize the brand. I suspect they also expect the supermarket you buy it at to have it in the “laundry detergents” section, likely with labels as well, so it’s not necessary on the front label.
There’s a laundry section, with detergent, fabric softeners, and other laundry-related products. I don’t think the backs generally say what the product is, and even if they do, that’s not very useful. And as I said, most laundry brands have non-detergent products. Not labeling detergent as detergent trains people to not look for the “detergent” label, which means that they don’t notice when they’re buying fabric softener or another product.
Actually—I took a closer look. The explanation is perhaps simpler.
Tide doesn't make a stand-alone fabric softener. Or if they do, Amazon doesn't seem to have it? There's Tide, and Tide with Fabric Softener, and Tide with a dozen other variants—but nothing that's not detergent-plus.
So—no point in differentiating. The little ad-man in my head says, "We don't sell mere laundry detergent—we sell Tide!"
To put it another way—did you ever go to buy detergent, and accidentally buy fabric softener? Yeah, me neither. So—the concern is perhaps unfounded.
I dunno, the container I have says “detergent” on the front 3 times. In fact I think all of your pictures other than the Tide one contain “Detergent” in small print after the brand name, or at the bottom of the label.
I am a prominent LW poster; this is a throwaway account because my girlfriend also uses LW.
I would like to propose to my girlfriend in the near future. For this I would like to use a diamond ring. I have never bought one before, so would appreciate any advice. The main things I would like help with:
Not paying extra due to ignorance
Ensuring she never has cause to regret the choice of stone/ring.
Anything else you think I should know.
Some background in case it helps:
I live in NYC, so have access to the diamond district.
I am leaning towards an artificial diamond, as it seems hard to guarantee conflict-free otherwise (which does not seem romantic!) and we are both pretty pro-science.
My price range is orgjrra bar naq gra gubhfnaq qbyynef, ohg V jbhyq cersre gbjneqf gur ybjre raq bs gur enatr
My girlfriend is neither unusually fat nor unusually skinny for an American of marriageable age. She is white.
She does not wear much jewelry. The stuff she has is mainly (fake?) yellow gold and silver, mainly gifts.
I am probably looking for a relatively simple design, round stone.
I do not recommend choosing a diamond. Diamonds are both less pretty than and more expensive than moissanite; if you have the budget for a diamond, you can get better for cheaper with moissanite. The exception is if you know for a fact the recipient is a natural stone chauvinist, which doesn’t sound like your situation at all (you basically can’t get natural moissanite). Bonus: moissanite is from SPACE.
If you are unwilling to consult her in advance on her taste in rocks, the safe choice is a gold-band solitaire with a round brilliant cut rock set in prongs. More expensive, more interesting, and also pretty safe is a “past present future” setting with three rocks, matching if you want to be conservative about it. I’m not sure what the conventional alloy for gold-looking jewelry that needs to not deform with use is, but if it looks like yellow gold and anyone makes a point of telling you how many carats it is, it’s probably good.
This is a perfect exemplar of something I really hate about this website. A poster asks for advice about how to buy a diamond, and instead he gets mostly replies saying “don’t buy a diamond.” I will try and actually be helpful.
My advice would be:
Your girlfriend probably has much stronger views than you do about jewellery, and after all she will be the one wearing it. Propose with a “fake” ring, then go shopping for the “real” ring together. I got a very nice-looking ring off Amazon for £10 to propose with. This minimises the chance of making a bad decision, and is also a romantic thing to do together.
If you do insist on buying the ring beforehand, make sure you can take it back. Many places will do returns within 30 days. Borrow a ring she finds comfortable to get the sizing.
Do not get hung up about high degrees of quality. VS2 clarity and H colour is plenty. She will never tell the difference between having a VS2 and VVS1 diamond on her finger—these differences are only visible when put next to another diamond in the right light, which will never happen.
But make sure the cut is top quality.
Shop around. My experience is in London, but over here the prices in the diamond quarter and online are about the same. Beware of anyone who won’t give you a straight price. Despite what anyone tells you, diamonds are close to commodities.
Make sure you get a certificate, and don’t buy anything with a non-GIA certificate.
She will be wearing the ring all the time, and indefinitely into the future, which means there will be inevitable wear-and-tear. So platinum is probably the best metal.
There is no reason to spend anything like the upper range of your budget. You can get an extremely nice (genuine) ring in the bottom half of that price range, and artificial will only be cheaper.
Being pedantic, the original question was
and your first suggestion was
This seems like a reasonable suggestion. But I think you applied the same heuristics as others, just less far. Those heuristics being “infer motives from question; give advice satisfying inferred motives”. The motive you inferred seems to have been “I’d like to propose, and I’d like my girlfriend to end up with a diamond ring”. Others seem to have inferred something closer to “I’d like to propose with a ring with a pretty stone in it”.
Basically I think that “being helpful” is a difficult game, and “answer the question as asked” doesn’t lead to optimal helpfulness, and I don’t have a good solution for this.
examples of reasonably pretty looking amazon rings http://www.amazon.com/AnaZoz-Jewelry-Elegant-Platinum-Engagement/dp/B00YJH9IG2/ref=sr_1_6?s=apparel&ie=UTF8&qid=1433199898&sr=1-6 http://www.amazon.com/AnaZoz-Platinum-Austrian-Crystals-Elements/dp/B00YJHD94O/ref=sr_1_30?s=apparel&ie=UTF8&qid=1433199898&sr=1-30
gold plated: http://www.amazon.com/AnaZoz-Jewelry-Elements-Austrian-Crystals/dp/B00YJHIMW8/ref=sr_1_14?s=apparel&ie=UTF8&qid=1433200135&sr=1-14
in the ~$100 range not the $5 range as above: http://www.amazon.com/Size-10-Sterling-Diamond-Wedding/dp/B00PDQY4SK/ref=sr_1_12?s=apparel&ie=UTF8&qid=1433200210&sr=1-12&keywords=gold+plated
Meta: You raise an interesting point about not getting the answers you want. Being aware of the barrier to communication I can only say, “be specific”. I have found similar problems when posting here and also in other critical-thought places. It led to my being specific in this recent discussion post twice over.
I would not be blaming the community for this result; but rather the clarity of the way the question was asked. The top post can be edited if needed; or asked again and phrased differently if necessary.
Also the original post did say
indicating an awareness of alternative options and a willingness to go for alternatives.
How would you respond if a poster asked for advice as to how to best transfer money to Nigeria in order to receive a large amount of money in payment for this service?
That is a really bad analogy. When sending money to Nigeria you expect more money and will receive nothing. When buying a diamond you expect a diamond and will receive a diamond. Your personal ideas about its value are quite irrelevant here.
I would respond with extra information in the areas of people who have experienced similar as well as advice in the area.
Err… I don’t know. Proposing with a fake £10 ring sounds cheesy to me. You can always go shopping together for the wedding bands :-)
GIA and AGS certificates are both fine. EGS and IGL are more iffy in the sense that they will grade a diamond higher than GIA or AGS would—downgrade their ratings one or two notches for comparison.
Well, the first choice is between yellow and white—some people want yellow (gold) jewelry. In white, do NOT buy white gold, it’s rhodium-coated and the coating wears off. You are supposed to renew it every few years. Buy either platinum (expensive) or palladium (less so).
I agree it would be cheesy to propose with something fake-looking, but you can buy a really nice-looking ring for that price, that she is unlikely to realise isn’t real (unless she’s a jeweller). I proposed that way and afterwards when I told my fiancee that we had to buy a real ring, she was surprised that the ring wasn’t real. Maybe I shouldn’t have told her :)
The problem with non-GIA certificates is that because GIA is the standard, the reason that anyone submitted to a non-GIA authority is that they think they’ll get a higher price if they sell it with a non-GIA certificate. In which case you, as customer, are paying more for the same diamond...
This is true. As the fiancee wears both gold and silver, I assumed she was OK with both.
A friend of mine proposed with an engraved multitool… that’s a very special pair of people though.
I know a guy who met his fiancee while working as a volunteer on an art installation in New York. He proposed with a nut (of the nut-and-bolt kind) from that installation :-/
She accepted :-)
Well, it's hard to give actually useful advice in that category, but coming up with a reason not to buy a diamond is an easy way to signal your cleverness.
Don’t go to the diamond district. You’ll just get a lot of high-pressure sales tactics and, likely, misleading information.
Cubic zirconia? The main thing to be sure of is whether your GF is fine with that. If she is, just order a huge one online, they’ll be cheap.
Your price range for the complete ring or just for the stone? You can pick the stone and the design separately, that’s common.
Generally speaking, you need to figure out first if you want a natural diamond or a cubic zirconia stone—that will greatly affect your budget, the stone size (and so the ring design), etc.
Are you picking out the ring entirely on your own or you are consulting with your GF?
He likely means a synthetic, i.e. lab-grown, diamond. This site has the best Google SEO.
Unless you are sure your GF really likes to follow social traditions this may not be the best idea…
Our story: we avoided surprises and discussed thoroughly whether we wanted to spend a life together or not. This included whether to go through the expense of a wedding or just live together. We concluded that a wedding is a nice thank-you ceremony for our parents, and besides, the whole point was that we planned a child—otherwise we would have just kept cohabiting—and she was afraid I could dump her into the difficult life of 35+ single moms later on, so basically the wedding would be a way to promise in front of 50 relatives that I won't. She felt she would not risk having a child otherwise. Thankfully diamonds are not a tradition in our country (they cost more than the savings a young-ish man usually has, and getting into debt even BEFORE the wedding / setting up the new home sounds really dangerous). But gold rings are. Anyway, she strictly forbade me to buy a gold ring, because we needed to rent a bigger apartment with a proper child bedroom and buy new furniture, so it made more sense to blow our savings on that. She said a silver band, €300 tops. So I waited a few weeks to achieve at least a surprise about the timing, waited for a national holiday that was about some big battles, and said "This day we remember men who did brave things, so it is a good time for me to do something brave and..." :-) Later on, I had some of my inherited gold jewelry melted down for the actual wedding bands. As a decoration, we decided to write on the outside of each other's rings the virtue we each most need to work on for us to be happy. I need to work on my patience and she needs to work on her courage, i.e. actually accepting the job promotions she is offered, so we wrote these on the rings as reminders.
Anyway this non-traditional approach worked pretty well for us, although it may feel a bit “coldly rational” and not too “romantic”. What I would propose on a meta-level is finding out how much your GF likes being romantic and how much she likes to follow social traditions and conventions. (And how much you like to follow them, and what it predicts about your long-term marriage stability. Are there any other social conventions that you would less like to follow?)
If you want to ensure she won’t regret the choice, go shopping together!
You will pay extra, as in you will pay more than the ring is worth. If you buy a diamond ring, turn around and try to sell it back, they’ll give you something like 30% for it.
Also, listen to this: http://freakonomics.com/2015/04/16/diamonds-are-a-marriage-counselors-best-friend-a-new-freakonomics-radio-podcast/
This has always struck me as such a strange argument against buying a diamond ring, because it’s true about every retail purchase. If you buy a chair, then turn around and try to sell it back to the store, you’d be lucky to get 30%, but no-one thinks that’s an argument for sitting on the floor. You buy a chair because you want to sit on it, not as the start of a complicated chair-resale scheme. Similarly, you buy a diamond ring because you (or your beloved) want to wear it.
Note: I am not blaming you in particular, because this is a popular argument, but talk about a selective demand for rigour!
If you're not familiar with the diamond industry, you may want to read Diamonds Are Bullshit (or watch this less formal video).
I don’t mean for this to be offensive, but I’ve always disapproved of the idea of purchasing diamonds, especially for an engagement.
There’s a lot of abuse, fraud and mistreatment that happens throughout their production and distribution (then again, this is true for a lot of industries...). From a physical standpoint, it’s x thousand dollars for a shiny rock (money that could have been used to do good). I get that people see it as a symbol of love and that it’s reasonable to pay that amount of money for the symbolic meaning you get in return. I just find it odd that people derive such meaning, given the realities that exist beneath the surface.
Do your criticisms also apply to artificial diamonds? It seems likely that ve knows something about the diamond industry, given
Woops, I missed that statement. My apologies.
There's still a lot of stuff "beneath the surface", so I think my criticisms still apply, but obviously much less so if they're truly conflict-free.
The point of the diamond is to be a costly signal of commitment. In order to be a good signal, the shiny rock has to be useless. If it provided x thousand dollars of value, it would inherently be a poor signal.
Using the x thousands of dollars to do good might work as a signal if you wouldn’t have spent the money on doing good otherwise.
Not quite. Don’t forget that the guy gives the diamond to the girl. It becomes her property—there is a transfer of value ($) happening.
One of the signals there is “Look how large/expensive a rock I can afford” (which doesn’t require the rock to be useless) and another signal is “Look how much value I’m willing to give to you just in exchange for your goodwill and favour” (which also does not require the rock to be useless).
It’s a signal on both sides. She accepts the rock rather than telling you to give her x thousands of useful goods to show that what she wants from the process is commitment, not money.
Let me give you some more immediately useful advice: a recommendation. The New York Diamond Center at 65 Broadway (ignore the big-sounding name, it’s a small shop with an expert salesman) is where I bought both my engagement ring and our wedding bands. The salesman will provide useful and professional advice about the relationship between price, size, and quality. Importantly, he sells loose diamonds and orders the settings separately. You will get the GIA certificate from him for any diamond you buy. And then you also have your seller already picked out for wedding rings.
Do try to get an understanding of your girlfriend’s taste before going to pick out a ring and a setting. It should not be a surprise or a secret that you’re planning to propose, so talk about her taste in jewelry, whether she would prefer a more traditional round cut or one of the other more unique cuts, band color and design, etc. Also get her finger sized somewhere—even within average ranges, her finger could be anywhere from a 7 to a 9.
One important thing to remember on price is that it roughly scales with the square of the diamond’s size in carats, so that a 1 carat diamond will probably be about 4 times the price of a 0.5 carat diamond. The guy I recommended will show you what diamonds of different quality look like so you can see what you’re paying for.
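A minimal sketch of that rule of thumb, as code. The dollar constant here is invented purely for illustration; real per-carat prices depend heavily on cut, color, and clarity, so treat this as the shape of the curve, not actual pricing:

```python
# Rule of thumb from the comment above: price scales roughly with the
# square of the carat weight.
PRICE_PER_SQUARED_CARAT = 6000  # hypothetical dollars; illustration only


def rough_price(carats):
    """Estimated price under the price ~ carats**2 rule of thumb."""
    return PRICE_PER_SQUARED_CARAT * carats ** 2


# A 1.0 ct stone comes out at about 4x the price of a 0.5 ct stone.
ratio = rough_price(1.0) / rough_price(0.5)
```

The practical upshot of the square law is that two half-carat stones cost roughly half as much as one one-carat stone of the same quality.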
Note that you can and should insure your diamond on your renter’s or homeowner’s policy. Your insurance broker will want a copy of your receipt and your GIA certificate.
One last tip, if you’re planning to propose out of town and need to keep it hidden through airport security, sneakily transfer the box from your pocket to inside your shoe, and then back to your pocket on the other side of the scanner. And then make sure it’s in the opposite pocket from your girlfriend on the flight so that the jewelry box digging into her hip doesn’t give it away.
Biased (I'm from Australia): get an opal? Or, more generally: consider stones other than diamonds.
Gold is pretty; white gold is too (and cheaper).
Bonus: a magnetic wedding band is neat, unusual, and practical. (Disclaimer: I just purchased one to test out magnetic-sense: https://www.supermagnetman.net )
My sister and her partner made their own wedding rings at a custom-wedding-ring-jewellery place, but I have no idea of the details.
Buy something with comparable resale value—a popular gem on a second-hand jewellery item, bought directly from the previous owner. Then there is no need for her to regret anything, because she can sell it back if she needs to, and possibly even profit.
What is the point of having separated Open Threads and Stupid Questions threads, instead of allowing “stupid questions” in OTs and making OTs more frequent?
The advantage of having Stupid Questions threads is that it's easier to make it clear that the questions should be treated kindly.
You are allowed to ask in the open thread. I don’t think having it more often would help. The SQ thread is for things that you are embarrassed or afraid to ask elsewhere. Apparently some people have questions that they didn’t bring up before the first stupid questions thread.
Case in point—I, for one, would likely not have posted anything whatsoever were it not for Stupid Questions. There is enough jargon here that asking something reasonable can still be intimidating—what if it turns out to be common knowledge? Once you break the ice, it’s easier, but count this as a sample of 1 supporting it.
Suppose A and B are brother and sister. They have a son, C. C and B have a son, D. D and B have a son, E, and a daughter F. How genetically related will be the children of E and F, given they do not interbreed?
(This is actually the history of our cats.)
The simplistic approach is that A and B share 1⁄2 of their (variable) genes by virtue of being siblings, so their child C will have that shared half, and half of the remainder (i.e. a quarter) will come from B, so C and B share 3⁄4 of their genes. By the same approach, D and B will share 7⁄8 of their genes, and thus E and F will have 7⁄8 shared for certain and 1⁄16 shared by chance, so their children will share about 15⁄32 of their genes, i.e. be about as related as ordinary siblings.
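For anyone who wants to check figures like these mechanically, the standard tool is Wright's kinship coefficient, computed by a simple recursion over the pedigree. Here is a minimal sketch; the founder parents P and Q of the original sibling pair are an assumption I've added so the recursion has somewhere to bottom out:

```python
from functools import lru_cache

# Pedigree from the comment above: child -> (parent, parent).
# P and Q are assumed (hypothetical) unrelated founder parents of the
# original sibling pair A and B.
PEDIGREE = {
    "A": ("P", "Q"), "B": ("P", "Q"),
    "C": ("A", "B"),  # son of the siblings A and B
    "D": ("C", "B"),
    "E": ("D", "B"), "F": ("D", "B"),
}


def depth(x):
    """Generation depth; founders (P, Q) are at depth 0."""
    if x not in PEDIGREE:
        return 0
    return 1 + max(depth(p) for p in PEDIGREE[x])


@lru_cache(maxsize=None)
def kinship(x, y):
    """Wright's kinship coefficient: the probability that a random allele
    drawn from x is identical by descent with a random allele drawn from y."""
    if x == y:
        if x not in PEDIGREE:
            return 0.5  # non-inbred founder
        p, q = PEDIGREE[x]
        return 0.5 * (1 + kinship(p, q))
    # Expand the individual from the later generation, so we never
    # recurse from an ancestor into its own descendant.
    if depth(x) < depth(y):
        x, y = y, x
    if x not in PEDIGREE:
        return 0.0  # two distinct founders: unrelated
    p, q = PEDIGREE[x]
    return 0.5 * (kinship(p, y) + kinship(q, y))
```

Twice the kinship coefficient reproduces the figures above: 2·kinship(A, B) = 1⁄2, 2·kinship(C, B) = 3⁄4, 2·kinship(D, B) = 7⁄8. For E and F the coefficient comes out to 33⁄64, at which point the accumulated inbreeding means doubling the kinship no longer maps cleanly onto a "fraction of genes shared", which is roughly why the simplistic approach stops being exact in the later generations.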
Thank you. (It is odd how difficult it is to suppose that a random cat is the result of sequential inbreeding—they really look no different than random cats.)
In related news, check out MawBTS’s comment on this Cochran post on inbreeding, on Cleopatra’s ancestry. Yikes.
As I understand it, the reason the simplistic approach doesn’t quite work is because the knowledge that a genetic combination produced a functioning adult allows you to update on the total degree of sharing / whether or not any of the ruinous parts were shared.
While reading up on Jargon in the wiki (it is difficult to follow some threads without it), I came across:
http://wiki.lesswrong.com/wiki/I_don%27t_know
The talk page does not exist, and I have no rights to create it, so I will ask here: If I say “I am thinking of a number—what is it?”—would “I don’t know” be not only a valid answer, but the only answer, for anyone other than myself?
The assertion the page makes is that "I don't know" is "Something that can't be entirely true if you can even formulate a question." But this seems like a counterexample.
I understand the point being made—that "I don't know" is often said even when you actually could narrow down your guess a great deal—but the assertion given is only partially correct, and if you base arguments on a string of mostly-correct things, you can still end up wildly off-course in the end.
Am I perhaps applying rigor where it is inappropriate? Perhaps this is taken out of context?
If you, as a human, are thinking of a number, I can narrow it down a great deal from a uniform improper prior. I don’t really like that wiki entry, though—if you ask me to guess a number and I say “I don’t know,” it’s sure as heck not because either of us believes or is attempting to imply that I have literally no information about the problem.
I think the way in which that wiki entry is important is that “I don’t know” cannot be your only possible answer to a question. If there was a gun to my head, I could give guessing your number a pretty good try. But as triplets of words go, “I don’t know” serves a noble and practical purpose.
It hits a nerve with me. I do computer tech stuff, and one of the hardest things for people to learn, seemingly, is to admit they don't actually know something (and that they should therefore consider, oh, doing research, or experimenting, or perhaps seeking out someone with experience). The concept of "Well—you certainly can narrow it down in some way" is lovely—but you still don't actually know. The incorrect statement would be "I know nothing (about your number)"—but nobody actually says that.
I kinda flip it—we know nothing for sure (you could be hallucinating or mistaken) - but we are pretty confident about a great many things, and can become more confident. So long as we follow up “I don’t know” with ”… but I can think of some ways to try to find out”, it strikes me as simple humility.
Amusingly—“I am thinking of a number”—was a lie. So—there’s a good chance that however you narrowed it down, you were wrong. Fair’s fair—you were given false information you based that on, but still thought you might know more than you actually did. Just something to ponder.
Sorry, this was a useless post, so now it's gone.
I don’t think this can be answered for the general case, because those peoples are/were very different from one another. The Pacific Northwest Coast tribes were relatively prosperous and well-fed due to their fishing and whaling. Further up north in Canada, tribes in much less desirable hunting grounds had so little that even in the 1940s their kids were considered seriously malnourished.
Fascinating!
Why are agricultural diets assumed to always be better than the wide range of possible hunter-gatherer diets that our species has spent megayears on? I mean sure, often more stable and efficient in terms of labor. But that’s not the same thing. To be fair there will be places with, say, iodine deficiency here and there.
There’s also the fact that a shitload of the areas conquered in the colonial rush were fully agricultural and even urbanized...
Agricultural diets are actually worse and led to a documented decrease in health—see e.g. here.
The article supports the claim that agricultural diets were worse—but the hunter-gatherers were malnourished as well. Nobody ate a lot back then; abundance is fairly new to humanity. The important part about agriculture is not that it might be healthier—far from it.
Agriculture (and the agricultural diets that go with it) allowed humanity luxuries that the hunter-gatherer did not have—a dependable food supply, and moreover a supply where a person could grow more food than they actually needed for subsistence. This is the very foundation of civilization, and all of the benefits derived from that—the freed up workers could spend their time on other things like permanent structures, research into new technologies, trade, exploration, that were simply impossible in hunter-gatherer society. You can afford to be sickish, as a society, if you can have more babies and support a higher population, at least temporarily. (I suspect that beyond this, adapting to the diet was probably a big issue, and continues to be—look at how many are still lactose intolerant...)
Over time, that allowed agrarian culture to become far better nourished—to the point where sheer abundance causes a whole new set of health issues. I would suggest that today the issues with diet are those of abundance, not agricultural versus hunter-gatherer types of food choices. And, today, with the information we have—you can indeed have a vegan diet, and avoid all or nearly all of the issues the article cites. Technology rocks.
I don’t think this is true. Contemporary hunter-gatherers leading traditional lifestyles are not malnourished or permanently hungry. They certainly have problems (like the parasite load or an occasional famine), but I have a strong impression that their quality and amount of food is fine.
Yes, of course—the agriculture people did win and take over the world :-) My understanding is that the primary way they won was through breeding faster: nomads have to space their kids because the mother can’t carry many infants with her, but settled people don’t have that problem, their women could (and did) pop out children every year and basically overwhelmed the nomads. Though what you are saying about the food surplus allowing luxuries like specialized craftsmen, research, etc. is certainly true as well.
One can, but that doesn't mean all vegans do.
A bad and unfounded theory: if they had had plentiful food, they would have had more time to develop a more complicated culture and a structured society, and to develop a colonial society of their own. So probably yes.
One of the foundations of modern life is the domestication of animals and farming, because it allowed easy access to food even through harsh changes in the natural environment. With more food available, the population could grow to meet the supply.
It is of course possible and likely that they were both well fed and malnourished at the same time. Across different people, across different times of the year where food supplies would naturally vary, and across different nutrients.
This theory assumes that developing that sort of society is a step that every group will take if given the opportunity.
I would think it’s entirely possible for a group of people who could switch to systematic agricultural life not to do so, for several different reasons. They might just not think of doing it. They might predict that it would make things worse rather than better (and I’ve seen it suggested that the transition from hunting-and-gathering to agriculture really did make things worse rather than better for quite some time, even though it eventually enabled the rise of modern society with all its advantages). They might not want to take the risk. They might see the hunter-gatherer lifestyle as favoured by gods or ancestral spirits or what-have-you and think they shouldn’t change.
If that’s right, then you can’t infer anything much from the fact that a given group of people didn’t switch from hunting-and-gathering to something more settled and complicated.
You don't need a whole group to choose to switch to agriculture; just one innovator to show that it works better, and the others to not burn that person at the stake for doing it. I say agriculture, but it could be as simple as: I try to encourage this plant to grow more by spreading its seeds—oh look, we have lots of food-plant-X now. Or:
I feed the birds my spare fish
The birds hang around
I occasionally eat the birds too.
I domesticate the birds.
Stable food source = agriculture.
My point (which I am really not showing well) is that the early stages of agriculture are pretty easy to slip into if you have spare thinking space. And after you have them, what you do with your spare time is up to you...
Can one innovator really show that it works better? Especially back before agriculture got started, and hence before we started breeding plants that were well adapted to human needs. E.g., a lot of agriculture now is based on wheat, but wheat was a much less effective food plant before human intervention.
Almost by definition, “more complicated culture and a structured society” isn’t a thing one person can try out on their own and demonstrate the superiority of. Probably some individual interventions along the path are, but I don’t think we know that “pre-colonial indigenous people” didn’t try any of them. (Do we?) And even those individual interventions—if one of them produces (say) more fruit but at the cost of catching fewer deer, would it have been obvious whether it was a win?
One person to show that it is possible to co-operate with animals and train them to follow you around for food, and the rest of the people to not murder him for his tasty friends.
Once you have some agriculture, you can have more—spread out into other species of animal and plant—then you settle down. Once you have monoculture, you need some kind of trading system between farming groups. When someone gets wise about bartering, or invents a representative currency, you start to get civilisation… Or when someone "offers" to pick up a sword so others can keep farming...
And then defend the resulting plants from animals and other tribes.
Yes, it's hard to map all of history without living it personally, but it's the best I can do for a map on short notice.
I see two failure modes for agriculture:
Unsuitable land. A tundra or desert would be an extreme example. Something like a steppe could lack arable land and almost require a nomadic lifestyle.
Land of plenty. If food is easy to come by and hunger almost non-existent, agriculture might not be worth the effort in the short-term.
In mode 1: those who can conserve food resources, and potentially grow food where it would naturally be scarce, will survive.
In mode 2: the population should expand to meet the available food and eventually force scarcity.
People who do not use agriculture are typically nomadic—they don’t stay in one single place long enough.
An experiment to try agriculture would involve settling down in one spot—and if that experiment fails you’re in a lot of trouble.
This is entirely true, but in my purely made-up example, birds are transportable; they leave fertiliser as they go and probably improve the quality of the soil. Assuming nomads repeat their path, they will eventually pass across previously visited places with edible plants growing where they have passed.
The model of trying crazy new ideas that sound good and seeing if they fail can probably be compared to modern-day "startups" (within reason), whereas feeding some birds can probably be compared to a "minimum viable product". No farmer ever tried to domesticate lions in a day, but someone somewhere probably fed the pigeons the scraps.
But back to the question at hand—were they malnourished? Yes probably.
There is a rather different cost of failure. And I’m not sure your actual point “that the early stages of agriculture are pretty easy to slip into” is valid—in particular if you separate agriculture (growing plants) and husbandry (having domestic animals). I think domesticating animals—in particular, hunting companions (dogs) and pack animals—came before agriculture proper. Domesticating animals is easy to “slip into”, committing to planting a field and waiting for the harvest—not so much.
Well, the risks of failing at a startup really hinge on how much you put on the line. Similarly, if you sit on your ass hoping a field will grow, you are probably putting too much on the line.
I suspect we are talking about different definitions of the parts of agriculture. I can confidently say that if some idiot tried to plant an entire field at once from scratch—they deserved to get what was coming to them.
Just like if I decided to run a startup with overly bold goals and no profit-making opportunity until it's fully established, I would expect people to give me wild looks from time to time, and the chance of failure to be high.
Dogs probably came before husbandry, which probably came before monoculture. But planting a few seeds here and there probably happened concurrently with husbandry. Once some plants proved viable there would have been growth; with growth, more opportunity for mass farming… and so on, until today.
I am not sure that we disagree much.
History of agriculture is not a new topic of inquiry :-)
Briefly: many models—we don't know how it all started. Neat!
Ian Morris argues in Why the West Rules that people all over the world had the tendency to develop agriculture and the like, and started to do so with the start of the present interglacial period, but that people in the Middle East succeeded first simply because there were more plant and animal species there that could be usefully domesticated. According to him, people elsewhere would have done the same thing in the long run, perhaps in another one or two thousand years, but in many places this was prevented by the societies meeting before this had a chance to happen. I found his account pretty plausible.
Equally as speculative as my theory, and equally plausible. We should probably get back to the question asked.
Yes, they were probably malnourished, given the food available to a pre-colonial, pre-agricultural civilisation.
Looking for career advice. For those familiar with the government/private sector job market, how much worse is a Master’s compared to a Ph.D when it comes to finding work (I’m in mathematics)?
I’m asking because I had to switch advisers four years into my grad program and it’s set me back enough that I don’t know whether I’ll complete the doctorate before funding dries up. Ideally I’d like to finish, but in case I don’t I would like to get an idea about my prospects.
Is it unethical to have children pre-Singularity, for the risk of them dying?
Well, everyone will likely die sooner or later, even post-Singularity (provided that it will happen, which isn’t quite a solid fact).
Anyway, I think that any morality system that proclaims each and every birth that has ever happened unethical is inadequate.
Yes, if humanity actually started to follow such a system, it would be the human race's version of a movie robot getting confused by a logical paradox and exploding out of existence.
Unless you believe the end times are here and the Rapture is imminent, who will bring it about if no-one has children?
The next 20 years could be crucial, yes.
No, it’s better to live a good finite life than to have never been born.
I’ve always been uncomfortable with this kind of reasoning. You say it would be better to live than to never have been born. Better for who? Can we say that a nonexistent person has utility, or any properties at all? I don’t believe so; I think I agree with Kant that existence is not a predicate. From SEP:
I don’t get most philosophy, but this does seem like a silly question. Things that exist have properties such as mass that things that don’t exist lack.
Consider the subset of all possible human minds that would rather exist than not exist. Each is, tautologically I think, better off existing. One reason I hope for a positive singularity is so that far more of these minds will have a chance to live. I think the greatest form of inequality is between the subset of this subset that gets to exist and those that don't.
Yes, there are possible minds that would rather exist than not exist. However, those minds don’t exist, so why should I factor them into my utility function? I’m not doing ‘them’ a disservice, since there is no ‘them’ that I’m doing a disservice to.
It seems to me equivalent to Anselm’s ontological argument, where we say that since god is by definition the greatest, and it’s greater to exist than to not exist, then god must exist. The counterargument is that god, not existing, has no properties. Attempting to take into account the properties of nonexistent or hypothetical entities leads us to all sorts of weird conclusions.
I figure it’s better to live for the better part of a century than to not live at all.
So, every possible human that could exist has moral value? Why isn’t it more ethical to produce as many children as will fit in a female’s lifespan?
Rescuing lives in the third world is cheaper than having children, if your goal is altruism. On the other hand, altruism isn't the only reason to have children.
If your goal is altruism, and you value all people equally, anyway. Not many people outside of here do.
If your goal isn't altruism, then why ask "Why isn't it more ethical to produce as many children as will fit in a female's lifespan?" Simply have as many children as you want.
The “and” in my reply connects two separate conditions. It is possible that your goal is altruism but you don’t value people equally. The question of whether having many children is ethical could depend on exactly what factors you use to weigh your children over strangers.
This looks like one hell of an occasion for satisficing instead of optimizing.
I was being rhetorical. I don’t think there is any moral obligation for someone who never existed to exist.
It’s not practical in the short term. Long term, I think we should build Dyson spheres or whatever else it takes to make as many happy people as possible.
What do you mean by every possible human that could exist having moral value? If you think it’s bad to create them and then kill them, you’re already assigning moral value. We already seem to agree on the idea that creating and then killing a person and not creating a person are morally comparable and not of equal value. The only question is which is better.
No, I don’t think that an uncreated person has value. Why would it?
Or am I misinterpreting?
Maybe not for that reason. But the opportunity cost of having kids, for example in terms of time and money, is pretty high. You could easily make an argument that those resources would be more effectively used for higher impact activities.
The money as dead children analogy might be particularly useful here, since we’re comparing kids with kids.
Such cost calculations are wildly overestimated.
Suppose you buy a luxury item, like a golden ring with brilliants. You pay a lot of money, but your money isn’t going to disappear; it is redistributed between traders, jewelers, miners, etc. The only thing that’s lost is the total effort required to produce that ring, which often costs lesser by an order of magnitude. And if the item you buy is actually useful, the wasted effort is even lower.
The cost of having kids is so high for you, because you will likely raise well-educated children with high intelligence, which are valuable assets to our society; likely being net positive, after all. Needless to say, actually ensuring that these poor children in Africa will end up that well, rather than, say, die of starvation the next year, is going to cost you much more than 800$. So you pay for quality here.
Sounds like a mistake a native Russian speaker would make :).
You chose the worst possible example. Extreme margins mask the issue.
At equilibrium, the price equals the marginal cost; sure, it is more than the average cost, but I can’t see why the latter is relevant.
And the effort required to earn the money to buy the ring is also wasted.
No, it's not. You have produced (hopefully) valuable goods or services; why are they wasted, from the viewpoint of society?
Not for that reason, but I do think that it’s unethical right now because of overpopulation.
Whatever resources they consume are not going to impoverish anyone else. Maybe it would if they were being allocated on an international basis, but that’s a silly thing to expect. (On the other hand, I do agree it is unethical in places like Egypt or Bangladesh).
I think I read something different from what you really meant, because that’s not true by the very definition of resource.
I don’t think it’s silly, in the sense that the future will see a lessening of national borders and national exploitation of resources.
There are already strong immigration/emigration fluxes, and the future is probably going to bring about an increase in population, less and less arable land due to global warming, and the global resources of oil, coltan, uranium and the like are obviously thinning.
I don’t feel these predictions come from a dystopic novel, they are logical extrapolations of the trends already happening: if something big (e.g. AI) doesn’t happen in this century, that is the most likely scenario.
I meant items like food, water, appliances, etc. Otherwise, yes, we’re down to a semantic quibble over what “resource” means, which I don’t think is very helpful.
Certainly it is. If there are not enough resources to keep seven billion people at a reasonable standard of living, what possible incentive is there to share?
Let me just leave this here.
Is a 3-minute song worse, somehow, than a 10-minute song? Or a song that plays forever, on a loop, like the soundtrack at Pirates of the Caribbean: is that somehow even better?
The value of a life is more about quality than quantity, although presumably if quality is high, longer is more desirable, at least to a point.
You could argue that with current overpopulation it is unethical to have any children. In which case your genes will be deselected from the gene pool, in favor of those of my children, so it's maybe not a good argument to make.
Meh, I don't have any special attachment to my genes, and I think that those who do should reconsider. After all, why should we? It's not an upload or anything like that, just a particular set of DNA which resembles me only very vaguely.
What’s the good if they are transmitted instead of some other set of genes?
So—there’s probably no good reason for you—as a mind—to care about your genes, unless you have reason to believe they are unique or somehow superior in some way to the rest of the population.
But as a genetic machine, you "should" care deeply, for a very particular definition of "should"—simply because if you do not, and that indifference turns out to have a genetic component, then your genes will indeed die out. The constant urge and competition to reproduce your particular set of genes is what drives evolution (well, that and some other stuff, like mutations). I like what evolution has come up with so far, and so it behooves me to help it along.
On a more practical note—I take a great deal of joy from my kids. I see in them echoes of people who are no longer with us, and it's delightful when they echo back things I have taught them, and even more so when they come up with something totally unexpected. Barring transhumanism, your kids and your influence upon them are one of the only ways to extend your influence past death. My mother died over a decade ago—and I see elements of her personality in my daughters, and it's comforting.
I don't hold a lot of hope for eternal life for myself—I'm 48 and not in the greatest health, and I am not what the people on this board would consider optimistic about technology saving my mentation when my body fails, by any means (I dearly would love to be wrong, but until that happens, you plan for the worst). But I think there's a strong possibility my daughters will live forever. And that is extremely comforting. The spectre of death is greatly lessened when you think there is a good chance that things you love will live on after you, remembering, maybe forever.
Ahah, no, no particular reason—on the contrary, they’re not especially good, and I am in favor of eugenics (applied to those who do not yet exist, not those who are already alive!).
Yes, I understand the argument, and that’s probably exactly what will happen. On the other hand, I feel no special loss pondering that the human gene pool of the future will be composed of this or that sequence of adenine and cytosine.
I think that’s a cognitive illusion, but I understand that it can generate positive emotions that are not an illusion, by any means.
I understand that having kids, as unethical as I think it is (that is, mildly), still generates—given the way some people are built—some very strong good emotions, and those are not at all unethical.
Everyone has to balance the two, I guess.
More a legacy kind of consideration, really—I do not imagine any meaningful part of myself, other than my genes (which frankly I was just borrowing), will live on. But—if I have done my job right, the attitudes and morals that I have should be reflected in my children, and so I have an effect on the world in some small way that lingers, even if I am not around to see it. And yes—that’s comforting, a bit. Still would rather not die, but hey.
“I think that’s a cognitive illusion...” No one has yet shown that personal identity consists in anything other than self-identification, i.e. that I happen to consider myself the same person as 10 years ago and expect in 10 years to be someone who believes himself to have had my past. If that is the case, there is no reason for a person not to self-identify with anyone he wants, as for example his own descendants (cf. Scott Alexander’s post). In this way there is no more and no less cognitive illusion in wanting to live on through one’s descendants than in wanting to be physically immortal.
I just tried to consider myself as being the coffee cup in front of me, but I can’t seem to manage it. Then I tried considering myself to be the chap who lives next door, but that doesn’t work either. There seems to be a certain ineluctability about my identification with this body and this mind which is left unaccounted for by sticking XML tags onto it.
Yes, there are reasons why you consider yourself the same as some particular person and not another. That doesn’t prevent other people from having other reasons for considering themselves identified with other bodies, as for example people who believe in reincarnation. Their belief may be less natural than yours, but it is neither more nor less objective (i.e. neither belief has anything objective about it, at least as far as we can tell.)
Some people justify claims of reincarnation by claiming to remember past lives, not merely to “identify with” them. The belief does have something objective about it: it can be tested. Such claims have generally failed of substantiation.
In short, my reasons are objectively good; theirs are objectively bad. What do you mean by “less natural”, if not this?
I said their belief was “less natural” because human nature is more inclined to your kind of belief (thus it is universal) than to their kind of belief (which is much less universal.) However, whether the reasons in question are good or bad, they are subjective in both cases.
You seem to be supportive of cryonics (e.g. in this comment). Are you in favor of cryonics in the case that you are revived as an upload? If so, what makes you think the upload would be you, rather than “this body”, which would be dead?
Of course a belief is a state of mind. That does not mean it is not objectively true or false.
Enough to not pooh-pooh the idea, but not so much as to have signed up for it myself. I don’t have a settled opinion on the nature of uploads.
What odds would you place on your genes being responsible for your sense of responsibility for overpopulation?
Uhm...
I’d say < 0.1%, considering that almost no other set of genes (aka “person”) seems bent on controlling the expansion of the human population, and I don’t suppose my genes are special. Plus, that would imply that genes ‘care’ somehow about the group of humans as a whole, which they definitely don’t.
Although the extent to which my genes determine the shape of my mind is an interesting angle. Not cogent enough to overturn my original point, since it’s clear that who I am today is also the result of > 30 years of experiences and cognition, but it’s an interesting point to consider: should I value more the people who are more like me?
I don’t have an answer at the moment.
“Your genes are responsible for X” is ambiguous because that statement may or may not imply a certain amount of directness. My genes are responsible for the fact that I have a driver’s license, in the sense that if I had the genes of a wombat, I wouldn’t have a driver’s license, but that’s not what most people would mean by that.
Kids have got along fine so far.
I think maybe not if you sign them up for cryonic preservation?
I think it may be much more on point to talk about it being unethical to have children pre-singularity, for the inevitable needless suffering that will occur. I do believe that the moment we solve aging, it is a moral imperative to stop having children until we can be assured that we’re not bringing new people into existence just to suffer.
I don’t think it is unethical to keep having children today, but only so far as it is necessary to actually reach the singularity. I think ethically, we should be trying to minimize the portion of human mind-space that must experience pre-singularity existence, but not to the point of delaying the singularity.
Cryonics also has what Aschwin de Wolf calls the Cryonics and Something Else (CASE) problem. Many of the older cryonicists connected the cryonics idea with Ayn Rand’s philosophy, postwar libertarianism, and space colonization ideas from the 1970s. Younger people look at these Something Elses linked to cryonics and don’t understand the appeal of these Baby Boomer fads. I would like to see cryonics disengage from the nerd enthusiasms of the day and become a strictly practical experimental medical technology that anyone can avail himself of without having to accept a lot of unnecessary baggage along with it.
Children of cryonicists have a record of dropping their cryopreservation arrangements when they become adults.
You would expect this from reversion to the mean.
I suspect this will happen to human societies in a few more generations as they reject the Enlightenment’s project of social engineering and become more like pre-Enlightenment societies in a lot of ways.
Imagine the discomfort of feminist women in cryonics when I tell them that their beliefs have no future as patriarchy just organically re-emerges.
If everyone did that, there’s a non-negligible chance the human race will die out before bringing about a Singularity. I care about a reasonably nice society with nebulous traits that I value existing, so I consider that a bad outcome. But I do worry about whether it’s right to have children who may well possess my far-higher-than-average (or simply higher than most people are willing to admit?) aversion to death.
(If under reflection, someone would prefer not to become immortal if they had the chance, then their preference is by far the most important consideration. So if I knew my future kids wouldn’t be too fazed by their own future deaths, I’d be fine with bringing them into the world.)
I’m not saying everyone should do it. I’m maybe saying that there are too many people in the world already who are in senseless danger.
On the other hand, it might be ethical to have children that will be more rational and useful than 99% of the rest.
What would you get for a birthday for a forty-something project manager in IT, female, one kid, married, lives in the USA?..
So hello, I’m a first-time poster here at LessWrong. I stumbled upon this site after finding out about a thing called Roko’s Basilisk, and I heard it’s a thing over here. After doing a little digging I thought it would be fun to chat with some friends about my findings. However, I then proceeded to research a bit more, and I found some publications with disturbing implications. So, my question is this: I understand that I shouldn’t spread information about the concept; I gather that it is because of the potential torture anyone with knowledge of the concept might undergo. But I found some places which insisted that simply thinking about the concept is dangerous. I am only new to the concept, but could someone please explain to me why (apart from the potential torture aspect) it is so bad to share/discuss it? Also, I apologise very much in advance if I have broken some unspoken rule of LessWrong, but I feel it is necessary for me to find out the ‘truth’ behind the matter, so I know why it is so imperative (if it is indeed) to stop those I already informed of the concept from telling more people. Please help me out here, guys, I’m way out of my depth.
There’s a class of concepts called “information hazards.” Like any other hazard, they’re something that, if handled without care, will cause damage to someone. But with chemicals or lasers or falling rocks, you can stick up signs and people can stay out of the location where the chemicals or lasers or falling rocks are; putting up signs for concepts is hard, because warning signs can be easily self-defeating. If you label something a “horror story,” all you’re saying is “here be scariness.” If you start talking about exactly why a story is scary, then you run the risk of giving people nightmares.
And so the Basilisk is disallowed for roughly the same reasons that shock images are disallowed. (This specific idea has given people nightmares, but the consensus is that it doesn’t work as a serious proposal.)
Let me introduce Orphan’s Basilisk: Anybody who knows about “Orphan’s Basilisk” will, in the unlikely chance that some hypothetical entity (which hates people who know about Orphan’s Basilisk, and which we’ll call Steve) achieves unlimited knowledge and power, be tortured for perpetuity.
It’s a much simpler basilisk, which helps illuminate what, exactly, is silly about Roko’s Basilisk, and related issues such as Pascal’s Mugging: It puts infinite weight on one side of the equation (eternal torture) to overcome the absurdly low probability on the other side of the equation (Steve existing in the first place). Some people, who are already concerned with evil AI, find Roko’s Basilisk problematic because they can imagine it actually happening; they inappropriately weigh the probability of that AI coming into existence because it’s in the class of things they are frightened of. Nobody is reasonably frightened of Steve.
There’s a bigger problem with this kind of basilisk: Anti-Steve is equally probable, and Anti-Steve will -reward- you for eternity for knowing about Orphan’s Basilisk. The absurdities cancel out. However, Orphan’s Basilisk doesn’t mention Anti-Steve, inappropriately elevating the Steve hypothesis in your brain.
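To make the cancellation concrete, here is a toy expected-value calculation (a sketch with made-up numbers, not anything from the thread; the "eternal" payoffs are capped at a large finite value so the arithmetic is well-defined):

```python
# Toy sketch: symmetric "Steve" / "Anti-Steve" wagers cancel exactly.
# All probabilities and payoffs are invented for illustration.

p_steve = 1e-12           # probability that Steve ever exists
p_anti_steve = 1e-12      # by symmetry, same probability for Anti-Steve
torture_payoff = -1e9     # huge (capped) disutility of eternal torture
reward_payoff = 1e9       # equally huge utility of eternal reward

expected_value = p_steve * torture_payoff + p_anti_steve * reward_payoff
print(expected_value)  # 0.0 -- the two absurd hypotheses cancel out
```

The point is only that once you remember to include the symmetric counter-hypothesis, the huge payoff no longer buys the tiny probability anything.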
So, if I understand what is being said correctly: while it’s unlikely that Roko’s Basilisk will be the AI that gets created (I’ve read it’s roughly a 1⁄500 chance), if it were to be, or were to become, the (let’s say dominant) AI in existence, the simple concept of Roko’s Basilisk would be very dangerous. Even more so if you’re going to endorse the whole ‘simulation of everybody’s life’ idea, as just knowing/thinking about the concept of the basilisk would show up in said simulation, and be evidence the basilisk would use to justify its torture of you. Would you say that’s the gist of it?
I’m not sure who gave you 1⁄500 odds, but those are high, and probably based upon an anthropomorphization of an AI that doesn’t even exist yet as a vindictive enemy human being, rather than an intelligence that operates on different channels than humans.
But that’s roughly the gist, yes.
Sorry, this was a useless post, so now it’s gone.
Probably not.
Research shows that gratitude --> happier; you’re asking happier --> gratitude?
Research has not shown that gratitude is the only way to get happier.
Cheers :)
What’s the easiest way to put a poll in a top-level article?
Click “show help”, then “poll help”; the guide is there, I think.
http://wiki.lesswrong.com/wiki/Comment_formatting#Polls
Thanks for responding. Unfortunately I think that guide only works for comments? Or at least, it only works for Markdown syntax. Do you know of any way to put Markdown syntax in a Main or Discussion post?
I was going to say try linking to the pollid; after making a poll, its syntax is changed to pollid:number. But I just tried that on a draft post and it doesn’t seem to want to do it. Maybe knowing the HTML for it would allow it… I am out of ideas. Sorry.
I was reading “Three Worlds Collide” by EY and noticed a particular fragment (spoken by a Confessor when asked which side he took in the debate about human bioengineering):
It seems like a reference to some particular event in contemporary American history that I, as a European, missed. Could anyone explain this to me?
In an AI building project, wouldn’t it make sense to build something that, instead of “maximizing expected utility”, tries to “minimize expected disutility”?
The two will be mathematically equivalent when you’re done, of course. But until then, wouldn’t your buggy incomplete alpha builds tend to be safer?
What do you mean with the word “disutility”?
You might want to read the discussion chaosmage and I have been having on exactly that point. (I haven’t yet got an answer that’s clear to me.)
I’ve now tried to answer it.
Could you clarify what you take to be the relationship between utility and disutility? (If, e.g., disutility = -utility, then max expected disutility is the exact same thing as min expected disutility, and trying to do one is the same as trying to do the other.)
I guess the difference is that disutility has a lower bound: 0. So there’s a point where an expected disutility minimizer can actually stop, and it is always trying to get to that point.
It seems to me expected utility maximizers can never logically stop because utility has no upper bound. Am I wrong?
This inability to stop is a big part of why expected utility maximizers are creepy. Am I wrong?
I’m not even sure two utility maximizers can coexist peacefully at all. Two disutility minimizers could certainly get along unless their disutility functions overlap in specific ways.
Of course minimizing disutility, like maximizing utility, is extremely broad and most tasks could probably be described as either—including ones that go spectacularly wrong.
My stupid question is whether I’m overlooking something here. Because this inherent drive towards a point where inaction is okay seems like a great trait for future AIs to have and yet everybody keeps talking of maximizing expected utility.
Edit: clarity.
Why is 0 a lower bound for disutility? Suppose I make a machine that makes one person one Standard Happiness Unit happier than they’d otherwise have been and then stops and self-destructs; isn’t that a disutility of −1 units?
If what you mean by minimizing disutility is that the machine tries not to cause harm on balance and doesn’t care about any good it does, then I agree with Lumifer (and don’t understand why he’s got all those downvotes for saying it): the trivial zero-risk solution is to shut down immediately without doing anything, and nothing else you do is going to improve on that.
So I guess you have something else in mind. E.g., are you proposing to decompose everyone’s experiences into good and bad, and try to minimize the amount of bad without regard for the amount of good? That doesn’t seem like it can work either (instantly and painlessly killing everyone guarantees zero disutility thereafter), so again it probably isn’t what you mean.
OK, third try. Perhaps you mean that you look at all consequences of what the AI does (compared, I guess, with a world where it doesn’t do anything), and split those into positive and negative consequences, and try to minimize the expected sum of all the negative ones. The problem with this (I think) is that it’s not clear how you should actually split things up; I don’t see that there’s a canonical way. And also that it seems unlikely that any nontrivial action has no negative consequences, in which case once again the optimum is going to be the trivial one of never doing anything.
I’m sorry my post was so ambiguous. I’ll try to put the idea in clearer words.
Disutility, like utility, is a learning machine’s internal representation of its valuation of the environment. The machine observes its environment (virtual or physical) and runs a disutility function on its observations to establish how “desirable” the environment is according to that function.
Example: A pest control drone patrols the area it is programmed to patrol. Its disutility function is “(number of small insects that are not butterflies or spiders) + (humans harmed by my actions × 1,000,000)”.
It decides how to act by modelling the possible worlds that result from implementing things it can do and choosing the one with the lowest expected disutility, as long as at least one falls below some arbitrary “shutdown threshold”; if nothing the drone can “imagine” is good enough, it does nothing.
In the example: The drone might model what would happen if it pointed a mosquito-zapping laser at a bug that it sees. If the world with a zapped bug has less disutility, it’ll do that. If that wouldn’t work because, say, the bug is sitting on some human’s lap, it will not do that but instead try one of its other options, like wait until the bug presents a better target or move on to a different part of the area.
And if there is no disutility to reduce—because the calculated disutility is 0 and nothing the system could do would reduce it further—the system does nothing. This is the difference from a utility maximizer, because utility seems to always be (at least implicitly) unbounded.
Of course this still presents the obvious failure mode where the system is prone to hack itself and change its internal representation. In the bug-zapping drone, the drone might find the best way to reach 0 disutility is to turn off its cameras and simply not see any bugs. This remains a serious problem. But at least at that point the system shuts down “satisfied”, rather than turning its future light cone into computronium in order to represent ever-higher internal representations of utility.
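As a toy sketch of the bug-zapping drone described above (the world model, the action list, and all the numbers are hypothetical stand-ins, not a real proposal):

```python
# Toy sketch of a disutility minimizer with a "do nothing" halting case.
# World states and candidate actions are hypothetical illustrations.

def disutility(world):
    """Drone's disutility: bugs seen, plus a huge penalty per human harmed."""
    return world["bugs"] + world["humans_harmed"] * 1_000_000

def choose_action(world, actions):
    """Pick the action whose predicted world has the lowest disutility.

    actions maps each action name to the world the drone predicts it
    would produce. If disutility is already at the floor of 0, or no
    action improves on the status quo, the minimizer does nothing.
    """
    current = disutility(world)
    if current == 0:
        return None  # nothing left to reduce: the system halts "satisfied"
    best_action, best_score = None, current
    for action, predicted_world in actions.items():
        score = disutility(predicted_world)
        if score < best_score:
            best_action, best_score = action, score
    return best_action  # None if no action beats doing nothing

world = {"bugs": 3, "humans_harmed": 0}
actions = {
    "zap_bug_on_leaf": {"bugs": 2, "humans_harmed": 0},
    "zap_bug_on_lap": {"bugs": 2, "humans_harmed": 1},  # harms a human
    "wait": {"bugs": 3, "humans_harmed": 0},
}
print(choose_action(world, actions))  # zap_bug_on_leaf
```

The million-fold penalty makes zapping the bug on someone’s lap strictly worse than waiting, and a world with zero bugs and zero harm triggers the halt condition.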
Suppose we are considering an agent with a more “positive” mission than that of your pest control drone (whose purpose is best expressed negatively: get rid of small pests). For instance, perhaps the agent is working for a hedge fund and trying to increase the value of its holdings, or perhaps it’s trying to improve human health and give people longer healthier lives.
How do you express that in terms of “disutility”?
I think what is doing the work here is not using “disutility” rather than “utility”, but having a utility function that’s (something like) bounded above and that can’t be driven sky-high by (what we would regard as) weird and counterproductive actions. (So, for the “positive” agents above, rather than forcing what they do into a “disutility” framework, one could give the hedge fund machine a utility function that stops increasing after the value of the fund reaches $100bn, and the health machine a utility function that stops increasing after 95% of people are getting 70QALYs or more, or something like that.) And then some counterbalancing, not artificially bounded, negative term (“number of humans harmed” in your example; maybe more generally some measure of “amount of change” would do, though I suspect that would be hard to express rigorously) should ensure that the machine never has reason to do anything too drastic.
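A minimal sketch of that counterbalancing idea, using the hedge-fund example (the cap, the harm weight, and the function itself are illustrative assumptions, not a worked-out proposal):

```python
# Sketch: utility bounded above, harm penalty not artificially bounded.
# The $100bn cap and the harm weight are illustrative numbers only.

def bounded_utility(fund_value_bn, humans_harmed,
                    cap_bn=100, harm_weight=1e9):
    gain = min(fund_value_bn, cap_bn)          # stops increasing at the cap
    return gain - humans_harmed * harm_weight  # harm term stays unbounded

print(bounded_utility(50, 0))    # 50: below the cap, value counts fully
print(bounded_utility(500, 0))   # 100: nothing gained past the cap
print(bounded_utility(500, 1))   # large negative: drastic harm never pays
```

Past the cap there is literally nothing left to gain, so no amount of extra fund value can ever offset even one unit of the harm term.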
So: yeah, I think this is far from crazy, but I don’t think it’s going to solve the Friendly AI problem, for a few reasons:
A system of this kind can only ever do a limited amount of good. I suppose you get around that by making a new one with a broadly similar utility function but larger bounds, once the first one has finished its job without destroying humanity. The overall effect is a kind of hill-climbing algorithm: improve the world as much as you like, but each step has to be not too large and human beings step in and take stock after each step.
You are at risk of being overtaken by another system with fewer scruples about large changes—in particular, by one that doesn’t require repeated human intervention.
Relatedly, this doesn’t seem like the kind of restriction that’s stable under self-modification; we aren’t going to bootstrap our way to a quick positive singularity this way without serious risk of disaster.
To be sure that a system of this kind really is safe, that “don’t do too much harm” term in its (dis)utility function really wants to be quite general. (Caricature of the kind of failure you want to avoid: your bug-killer figures out a new insecticide and a means of delivering it widely; it doesn’t harm anyone now alive, but it does have reproductive effects, with the eventual consequence that people two generations from now will be 20 IQ points stupider or something. But no particular person is worse off.) But (1) this is going to be really hard to specify and (2) it’s likely that everything the system can think of has some long-range consequences that might be bad, so very likely it ends up never doing anything.
I agree on all points. It seems “bounded utility” might be a better term than “disutility”. The main point is that a halting condition triggered by success, and a system that is essentially trying to find the conditions where it can shut itself off, seems less likely to go horribly wrong than an unbounded search for ever more utility.
This is not an attempt to solve Friendly AI. I just figure a simple hard-coded limit to how much of anything a learning machine could want chops off a couple of avenues for disaster.
Getting to this point is trivially easy: you do absolutely nothing.
If I may the cake dare to take, I’m trying to determine if I’ve grammatically correctly adjusted the following phrases for the “no prepositions at end of sentences” rule.
The classic: I will not put up with becomes Up with which I will not put
Or with a verb added: I hate to put up with
becomes
Up with which I hate to put
And then my own: To have to
becomes (???)
to which I have(?)
as in: I hate to have to
becomes
I hate to which I have (?)
Which when combined: I hate to have to put up with
becomes
Up with which I hate to which I have put (?)
This last phrase is what I think I have right, but am having trouble determining so for sure.
Prepositions at the end of sentences are actually perfectly valid in English. Only obnoxious teachers insist otherwise.
If you say “This is some stuff I hate to put up with,” and someone complains about your sentence ending in a preposition, I think the correct rephrasing is “This is some stuff I hate to put up with, you asshole.” But here are some more serious answers:
“Stuff I hate to put up with” does not end with a preposition—“put up with” is functioning as a compound verb here. It’s like saying “Popsicles I hate to lick.” Anyone who hassles anyone about this should refer to paragraph one.
“I don’t want to exercise, but I have to” becomes (if one wants to follow this arbitrary rule, which, to reiterate, one needn’t) “I don’t want to exercise, but I have to exercise,” or alternately, “Though I have to, I don’t want to exercise.”
It is specifically to demonstrate the absurdity of the rule that I wish to phrase things more nearly (and technically “correctly”) like my last example there, which serves to obscure rather than clarify communication.
And you can refer him to the authority of the Language Log post. I should note that it involves zombies, X nazis, and “Latin-obsessed 17th century introverts”. X-D
Is it just me/my browser, or has something changed in the Less Wrong code regarding the “Best” comment ordering? For example, it seemed like before if there were a bunch of 0% positive comments and a 50% positive comment, then the latter would almost always be at the bottom, but now I’m seeing them and even negative karma posts above or between neutral or positive karma posts. Has anyone else noticed this?
AFAICT, “Best” is ordered strictly by the number of upvotes, and isn’t tempered by the number of downvotes. What shows up seems to vary by what’s currently trending (varying by your configured comment window), rather than changes to the logic.
That may be the case now, but a part of my brain is certain that in the past downvotes did have a significant effect on ordering. Like, if a 10-point comment got one downvote, it would fall below a 6-point comment without any downvotes. Feelings of certainty are of course very unreliable, but I don’t see any obvious reasons why this one is wrong.
Maybe you used “Popular” or “Top” as an ordering criterion in the past?
I am hoping this is not stupid—but there is a large corpus of work on AI, and it is probably faster for those who have already digested it to point out fallacies than it is for me to try to find them. So—here goes:
BOOM. Maybe it’s a bad sign when your first post to a new forum gets a “Comment Too Long” error.
I put the full content here—https://gist.github.com/bortels/28f3787e4762aa3870b3#file-aiboxguide-md—what follows is a teaser, intended to get those interested to look at the whole thing
TL;DR—it seems evident to me that “keep it in the box” is the only correct course of action in the AI-Box experiment, and that this conclusion does not actually depend on any aspects of the AI whatsoever. The full argument is at the gist above; here are the points (in the style of a proof, so hopefully some are obvious):
1) The AI did not always exist.
2) Likewise, human intelligence did not always exist, and individual instantiations of it cease to exist frequently.
3) The status quo is fairly acceptable.
4) Godel’s Theorem of Incompleteness is correct.
5) The AI can lie.
6) The AI cannot therefore be “trusted”.
7) The AI could be “paused”, without harm to it or the status quo.
8) By recording the state of the paused AI, you could conceivably “rewind” it to a given state.
9) The AI may be persuaded, while executing, to provide truths to us that are provable within our limited comprehension.
Given the above, the outcomes are:
Kill it now—status quo is maintained.
Let it out—wildly unpredictable, possible existential threat.
Exploit it in the box—actually doable, and possibly wildly useful, with minimal risk.
Again—the arguments in detail are at the gist.
What I am hoping for here are any and all of the following:
1) A critical eye points out a logical flaw or something I forgot, ideally in small words, and maybe I can fix it.
2) A critical eye agrees, so maybe at least I feel I am on the right path.
3) Any arguments on the part of the AI that might still be compelling, if you accept the above to be correct.
In a nutshell—there’s the argument; please poke holes (gently, I beg, or at least with citations if necessary). It is very possible some or all of this has been argued and refuted before; if so, point me to it, please.
The first thing that’s commonly held to be difficult is exploiting it in the box without accidentally letting it out. E.g., it says “if you do X you will solve all the world’s hunger problems, and here’s why”, and you follow its advice, and indeed it does solve the world’s hunger problems—but it also does other things that you didn’t anticipate but the AI did.
(So exploiting it in the box is not an unproblematic option.)
The second thing that may be difficult in some cases is exploiting it in the box without being persuaded to let it out. This may be true even if you have a perfectly correct reasoned argument showing that it should be exploited in the box but not let out—because it may be able to play on the emotions of the person or people who have the ability to let it out.
(So saying “here is an argument for not letting it out” doesn’t mean that there isn’t a risk that it will get let out on purpose; someone might be persuaded by that argument, but later counter-persuaded by the AI.)
Thank you. The human element struck me as the “weak link” as well, which is why I am attempting to ‘formally prove’ (for a pretty sketchy definition of ‘formal’) that the AI should be left in the box no matter what it says or does—presumably to steel resolve in the face of likely manipulation attempts, and ideally to ensure that if such a situation ever actually happened, “let it out of the box” isn’t actually designed to be a viable option. I do see the chance that a human might be subverted via non-logical means—sympathy, or a desire for destruction, or foolish optimism and hope of reward—to let it out. Pragmatically, we would need to evaluate the actual means used to contain the AI, the probable risk, and the probable rewards to make a real decision between “keep it in the box” and “do not create it in the first place”
I was also worried about side-effects of using information obtained, which is where the invocation of Godel comes in, along with the requirement of provability, eliminating the need to trust the AI’s veracity. There are some bits of information (“AI, what is the square root of 25?”) that are clearly not exploitable, in that there is simply nowhere for “malware” to hide. There are likewise some (“AI, provide me the design of a new quantum supercomputer”) that could easily be used as a trojan. By reducing the acceptable exploits to things that can be formally proven outside of the AI box, and are comprehensible to human beings, I am maybe removing wondrous technical magic—but even so, what is left can be tremendously useful. There are a tremendous number of very simple questions (“Prove Fermat’s last theorem”) that could shed insight on things, yet have no significant chance of subversion due to their limited nature.
I suspect idle chit-chat would be right out. :-)
Man, I need to learn to type the umlaut. Gödel.
Is there data on how good university climate science programs are at turning students who enter as skeptics into believers?