Stupid Questions (10/27/2014)
I think it’s past time for another Stupid Questions thread, so here we go.
This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Please respect people trying to fix any ignorance they might have, rather than mocking that ignorance.
A question that has been asked before, and so may be stupid: What concrete examples are there of gains from CfAR training (or self-study based on LessWrong)? These would have to come in the form of very specific examples, preferably quantitative.
E.g. “I was $100,000 in debt and unemployed for 2 years, and now I have employment earning twice what I ever have before and am out of debt.”
“I never had a relationship that lasted more than 2 months, but now am happily married.”
“My grade point average went up from 2.2 to 3.8”
“After struggling to diet and exercise for years, I finally got on track and am now in the best shape of my life.”
etc.
I want to point out that this question doesn’t quite test for the right thing. One way an organization like CFAR can cause extreme life improvements is by encouraging participants to do extreme things generally in order to increase the variance of alumni outcomes after the workshops. That leads to potentially many extreme improvements but also potentially many extreme… disimprovements? And the latter are harder to notice because of survivorship bias. (There’s also regression to the mean to watch out for: you expect the population of CFAR workshop attendees to be somewhat selected for having life problems and those could just randomly improve afterwards.)
I expect the main benefit of CFAR training should be that it improves median outcomes; that is, that it improves alumni ability to consistently win. But this is hard to test for by asking for anecdotes: it would be better to do statistics.
You’re absolutely right. CfAR could get statistics by measuring quantifiable goals across its students: Grade point average, wealth, weight-loss; preferably with a control group. Until then, I’m just looking for any info I can get.
Fair enough. In that case, after my first CFAR workshop I lost 15 pounds over the course of a few months (mostly through changes in my diet) and started sleeping better (harder to quantify, but I would estimate at least an effective hour’s worth of extra sleep a night).
Within 3 months of attending a CFAR workshop I had left my job for one that I preferred and that paid 15% more. Within 6 months I had started exercising daily (previously once every few weeks), waking up consistently at 6am (previously varied anywhere from 8-10am), and eating significantly healthier (I also started eating vegetarian meals 2-3 times a week, previously 0). Independent of any of these concrete behaviors, I now have a very strong belief that I can intentionally construct/change my life and behavior in ways that will actually work.
Post hoc ergo propter hoc, etc. I have a fairly strong belief that attending a CFAR workshop and interacting with the alumni community has been at least partially causal in me improving my life, but I don’t think any of what I’ve said constitutes good evidence of that claim.
The first time I read the sequences, they were earth-shattering revelations that upset my entire life. The second time I read them, I could only make it through a few posts, because everything they said was obvious. So one gain for me is that existential/religious questions no longer bother me. I got answers that satisfied me, and I’ve moved on with my life. I suppose you could argue that I could have found the same answers somewhere else, but honestly, I doubt it.
Another big change is how I argue with people. One of my favorite Less Wrong ideas is the Taboo a word sequence. I use this all the time. Whenever I encounter some vague statement like “Free will is nonsense” or “We should live in a more just society” or something like that, I taboo the word and try something else. I approach words differently. I don’t know if this has improved my life, but I no longer feel as though I am incapable of expressing myself and my position.
I think I may be more rational. I know, without a doubt, that participating in LessWrong has caused me to self-identify as a rationalist, more than I would have if I had not come here. I feel that this self-identity is enriching and has made me a better person.
I try and do something I have never done before every other week. This habit was inspired and reinforced by Less Wrong. This has made me less afraid to do new things.
I often catch myself rationalizing. This didn’t use to happen, I think.
I’ve installed various mental habits from my time on Less Wrong. My habit of trying to notice rationalization as it happens, in combination with the “Tsuyoku Naritai” attitude, led me to become much more serious about physical fitness. In the last fourteen months, I’ve gone from moderately overweight and sedentary to lean, fit, and strong, with a minimum of the motivation issues that had previously prevented me.
I’ve never attended CFAR and am not especially deeply involved with LW, so this may not really be the kind of example you’re looking for.
OK, that’s a pretty specific benefit. Thanks.
Search “Rationality Diaries” on LW to see a huge archive of examples from recent years. (Those are places where users upload recent stories of victory from their lives.)
Thank you, those have some excellent answers to my question.
I worked as a neuroscience research assistant for 5 years. For the latter 3 of those years, I had wanted to leave that job and move on to something better, but had been unable to make a decision about what to pursue and to actually pursue it.
7 months after my first CFAR workshop, I started a new job making 25% more. There were other causal factors. Part of the motivation to do job searching was due to the fact that my research position would be ending, and part of the salary increase was due to the fact that I left academia. But I also credit CFAR training, including the follow-ups and the support I got from the community, as a significant cause of this success.
Other semi-quantifiable changes:
- I keep a budget now.
- I’m investing money for retirement each month. I was not investing any before.
- I’ve learned 1.5 new programming languages, and have learned several new statistical analysis methods (consider that I was doing almost nothing in terms of job-relevant skill development prior to CFAR).
- I’ve started a biweekly productivity meeting at my apartment (before I did not organize events other than the occasional party).
I’ve made many other changes in my life regarding habits, learning and practicing new things, and pushing the boundaries of my comfort zone. Perhaps the most important thing for me is that I no longer have the sense of being overwhelmed by life, or of there being large categories of things that I just can’t do. I’d say this is mostly the result of a cascade of changes that occurred in my life due to attending CFAR. And to repeat what nbouscal said, I feel like I can change my life in ways that will both work and feel good.
Most benefits (that I can recognize) aren’t particularly concrete. Some things (like the length of my relationship) could be partly a result of internalizing LW concepts, but I have no way to measure it.
The biggest concrete impact is that I’ve grown more social. Not just within the aspiring rationalist community (though that’s also a big part of it) but by slowly expanding my comfort zone and gaining a better understanding of what I find important in different kinds of social encounters.
Not a very satisfactory answer, I assume. I think that’s for two reasons:
My life wasn’t in a good spot the last year (or even two years) due to things that were more or less out of my direct control. Not the best circumstances to execute big life-changing plans.
A lot of the benefits I’ve gotten from hanging out on LW are relatively small. They do add up to larger increases in my wellbeing, but nothing particularly concrete.
Maybe I’ve got a better answer in a year.
If you want marketing materials or want to estimate the upper bound, the question is useful. If you think you’ll get some sort of a representative sample, um, sorry to disappoint you...
A story about how someone got a result usually contains more than just the result. It suggests which techniques the person learned and credits for the success. It can also tell you about the person’s background.
Case studies are useful to start thinking about a subject.
Quite right. There are always people who undergo spontaneous improvement, and if we cherry-pick success stories, we might just come up with those examples. I’d rather have a proper statistical study, and I think that CfAR is working on that. But until then, I’ll make do with success stories.
I think the question is a useful one even if it fails to elicit useful answers.
I went from being single to being in a relationship that’s still going 2ish years later, but who knows if going to a CFAR thing was at all responsible.
Could a psychological payoff be specific enough as well? Would e.g. “an embellished sense of intellectual superiority” be too vague for the purpose of your question?
That’s the Dunning-Kruger thing, isn’t it? :-P
that’s… not a stupid question.
This may be long for a stupid question… and it’s not really one question… but it seems like a safe first post kind of place! It has just been on my mind a lot the last few months.
I was recently doing a review of my workplace’s management system and used personal life examples to demonstrate why the management system is (/would of been) effective. Instead of convincing anyone else, I convinced myself my life would be better off if I had a personal management system.
I’ve googled high and low and found nothing that I could draw on. The amount of self-help and motivational books I waded through, though… that was impressive. I find it particularly interesting that it doesn’t seem to exist regardless of culture, even in procedure-heavy ones like Japan and Korea (at least in business) or more direct/rigid ones like Germany. Life just happens and you muddle through.
After putting pen to paper, I realised many “deficiencies” in my own life. I don’t have a records “policy”—my files are on hard drives, NASes, a couple of clouds, and a drive in a bank deposit box. I have no idea how many copies of my tax returns are floating around out there. I used to track expenses, but when I wanted to see my cashflow I realised I had no data for the last 2-3 years. I recently had to run out and buy cleaning supplies because I ran out—that sums up my inventory system. That’s a major barrier to cooking at home. I’m not sure what I have for emergency supplies either. I certainly don’t plan (schedule or monitor) activities in my life at all. Risk management? I think my household insurance auto-renewed, but I’m not 100% sure.
Yet, I’d be considered decently organized among my peers. That seems terrifying.
Life isn’t a project, or a company, but I think the “management system” approach would be beneficial because:
It engages system 2 thinking, resulting in (presumably) better plans
It allows optimization through sharing and iteration (assuming some common approaches develop)
It helps with communicating and being held accountable, at least for a certain set of relationships
It helps manage change, like the move from portable drives to cloud storage, or changing insurance coverage
It increases transactive memory, where you can put trust in a system to avoid having to keep mental maps (of files, of money, of contingencies)
It allows outsourcing since the process is relatively well defined, such as to (virtual) assistants or maid services (I believe it’d be a net economic boon)
It can help you be proactive, like staying in touch at regular intervals, both prompting and prioritizing (more to do with how you approach life)
However, my prior is that almost no one does this. The most I’ve seen are individual components—some people run very good household budgets. It just doesn’t exist as an overall framework.
Why? Is it because it doesn’t work and/or isn’t a suitable approach? A lack of definition to “life” to structure around/optimize towards? Is it more emotional, not giving up control and flexibility? Not taught/not socially acceptable? Does System 1 not bother with it, leading to failure?
--
I’ve spent some time working on this but it’s tough and I’m really not sure the effort would be worthwhile. The trigger to take it seriously was a long chain of events that led to a life achievement list. It’s still in brainstorming mode, but getting huge and it seems to me that I need to put a lot more effort into optimizing for it.
Just to be clear, a lot of the answers I have gotten in person have been along the lines of “pick what you want and focus on it”. I think it misses what I’m trying to convey—how do you manage everything you don’t focus on? Why do you do things a certain way? I want to know what to do with my copy of my taxes next time I file. I want to do it because I should and I want to know why I should. The system offers an imperative, built on the foundation of having thought it out and deciding “this is how it should be”.
You might be interested in the book Getting Things Done. It was written before smartphones and cloud syncing calendars but it can easily be adapted to To-Do lists and managing your life in the modern age.
A basic summary is thus: every action you need to do but haven’t yet done is an open loop in your mind. You have to keep thinking about it until you do it and close the loop. However, lots of things can’t be done except at specific times and places. You can maintain separate to-do lists for things that can be done anywhere (call a friend to schedule a movie, tie your shoe, etc.) and things that need you to be at your desk, at work, or at a grocery store. Storing all the myriad things that you need to do in life in your head is stressful and difficult to do successfully. If you offload this information to contextual to-do lists, you can forget about the open loop and rely on your general system to remind you of it only when you can actually do something about it. This allows you to focus on things you’re doing in the moment, without worrying that you’re forgetting a bunch of things you still need to do.
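The contextual-lists idea above is essentially a small data structure. Here is a minimal sketch of it; the class name, context labels, and example tasks are my own illustrative assumptions, not anything from the book:

```python
from collections import defaultdict


class GTDLists:
    """Contextual to-do lists: each open loop is filed under the
    context (place/tool) where it can actually be acted on."""

    def __init__(self):
        self.lists = defaultdict(list)  # context name -> pending actions

    def capture(self, context, action):
        """Offload an open loop into the list for its context."""
        self.lists[context].append(action)

    def actionable(self, context):
        """Everything doable right now: anywhere-tasks plus this context's tasks."""
        anywhere = self.lists["anywhere"]
        if context == "anywhere":
            return list(anywhere)
        return anywhere + self.lists[context]

    def close(self, context, action):
        """Mark an action done, closing the loop."""
        self.lists[context].remove(action)


gtd = GTDLists()
gtd.capture("anywhere", "call a friend to schedule a movie")
gtd.capture("grocery store", "buy toilet paper")
gtd.capture("desk", "file tax paperwork")

# At the grocery store, only two of the three loops are relevant:
print(gtd.actionable("grocery store"))
# ['call a friend to schedule a movie', 'buy toilet paper']
```

The payoff is exactly the one described above: the desk task never surfaces while you are shopping, so you can safely forget it exists until you are back at your desk.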
I am working through GTD myself and will post a more extended summary in language different from the book.
Much appreciated, I’ll take a look!
I think this is a large part of it, but it seems like a subset of what I’m thinking about. This is a great answer if someone came up to you and asked “how do you get things done?”, and it is a pretty broad planning approach. Even better, those who use a similar system can talk about their approach, ideally sharing ideas and “best practices”. Even when I googled the book, I got thousands of hits that would help me—tools, blogs, reviews. To use corporate speak, this would be a set of policies, processes and guidance… complete with workflows. It fits perfectly (this comment pending me actually reading the book!).
What I struggle with is that there doesn’t seem to be anything above this. Most companies have some sort of management system that would give context to the process.
To give a corporate example applied to real life, most people don’t really evaluate risk in their own life. Here at LW and such we talk about existential risk, but I’d still guess that very few have disaster supplies or plans for much more likely events (this is from recent talks triggered by http://www.shakeout.org/ ). My wife and I talked casually over dinner about earthquakes and realised this is a non-trivial problem that probably should be taken seriously. Getting home, walking long distances, bridges, what to do if our apartment isn’t structurally sound or flooded, where to meet/wait if communication is impossible, what to do in winter with no heat.
The same applies to a lot of other risks—robbery, fire, financial, accidents, sickness...
Each of those can be dealt with via tasks that would be governed by the GTD process, so I think it’s a big part (it is either done or not!), but I feel like there is this missing umbrella that binds these things together. There also seem to be a lot of social gains to it. Talking about disasters with our friends would put some pressure on them to prepare, as well as let us share how we would do it and possibly find shared solutions.
The example above could be replaced with less extreme ones, like how you store your files/pictures/etc. or do your household budget.
I started working on something like that a while ago.
Interesting! Did you make any further progress? I personally see a great deal of value in this kind of risk identification. A lot of risks are not easily solved (eg: just buy insurance!) or properly quantified (eg: accident insurance protects me from income loss!).
No progress, I dropped the ball because of smooth sailing in my life. Which is exactly why I should prepare for things getting worse.
But seeing as people gave upvotes, there seems to be interest. I might pick the ball up again.
I’ll share my system in case it’s helpful as a reference point.
Mint.com does a great job of tracking expenses, if you primarily use a credit card. (Which you should, for the 1% discount on everything, unless you have bad self-control issues with money.) It also lets you set budgets, which are fine for rough estimates, but its strength is in recording all your transactions; I just use Excel for planning out a yearly budget. For tax record keeping, all my pay stubs and tax-deductible donations go in a ‘Fiscal Year 2014-2015’ folder on my computer, which I back up to Google Drive and to a hard drive every two weeks.
I’ve had great success with google calendar for managing my schedule, since it syncs with my phone and I get alerts 15min before an event happens. (For example, the biweekly backups are on here as an event, as are yearly ‘time to set a budget’ reminders, work schedules, gym visits, etc). A tip if you’re overloaded with work and constantly busy is to block in some relaxation or ‘hang out with friends’ time so that those do not get pushed to the wayside.
For notes and to-do lists, I carry a small notebook and pen in my back pocket, which works well for me. (I use something similar to the getting things done method for the lists).
A lot of household supplies and food are totally fine to get as ‘just in time’ inventory; they just go on the to-do list when I run out of the stored supplies (toilet paper, etc.). The exceptions, of course, are a plunger, a fire extinguisher, and a first aid kit, which you should always have on hand. Every weekend I usually go on a grocery/supplies trip, do laundry, and clean the apartment.
For emergency preparedness, it’s not something you have to constantly think about—you could take a day this weekend and figure out how much food and water you’d need for 3 weeks of power outage, go buy that, and forget about it. You might also want a ‘go bag’ in case you need to make a flight quickly.
I also feel like I’m fairly organized compared to my peers, but I don’t actually know what their personal management systems look like. In general though, a lot of life is take-it-as-you-go, and I would not feel comfortable with a system that I can’t afford to totally ignore if I need to.
I have a similar vision to this but since my life ran smoothly for a couple of months, I did not put my thoughts onto paper.
Generally it seems that any organisation system is highly personal, as there are many individual kinks to be worked out, such that there is almost no way to have a holistic system apply to everyone. Also, the vast majority of people are not interested in these kinds of things, I think.
The phrase “run your life / yourself / your family like a business” sometimes pops up but does not take the principle very far.
Also, the whole thing is overwhelming. When I tried to do stuff like track my budget, I ran into the problem that I would need to type in every damn receipt I got, or be content with knowing that I spent a given sum at a grocery store without knowing on what products. When I bought some food on the go, I needed to take a note. When I got digital receipts, the date on the receipt did not match the date the money disappeared from my account. And so on. The whole thing was not to be trusted, and instead of helping me it bothered me, so I abandoned it for casually taking a look at my accounts and estimating how long the money could last.
This seems very true to me and now seems like the largest factor. That combined with it being a horrendous amount of work, although I believe that may be because there is no foundation to draw on.
I can relate to this very well. I did write down every expense for almost 4 years, but it was just too much work. It was never really useful either, which is surprising given everything I’ve read about budgets/etc. It may have helped get my wife and me on the same page, however, so I don’t regret doing it.
I’d argue that this is the reason for having an overall umbrella. Why did we attempt to do this? Because we should? Because we wanted a specific answer? Does it require that kind of data? Most people who run a budget (including old me, before I got lazy!) did it because… we did. I liked the data. But it didn’t feed into anything, didn’t answer any important questions and didn’t seem to influence my behavior.
On the other hand, if I needed to know if I can save enough in time to travel to Europe for 4 weeks… that’s important to know! But it has nothing to do with a traditional budget (just cashflow), which I can do in ~15 minutes a month, if I organised myself to do it. (1)
Back to the overall umbrella/management system—the “why” should sit at the top. I do it because I need to plan for certain things, which cascades down to having a “budget” to a certain level of detail. Instead we seem to do it from the bottom up, resulting in making “perfect systems”. The idea drives it rather than the actual need. There is no “good enough” in that approach. It’s “cool”, not effective. Maybe that’s what I’m trying to define. A management system is meant to be effective, whereas we pick up a lot of good and cool ideas and try them out.
I actually have a story about that. The year was 2006 and I was heavily motivated to change my lifestyle. I’d just moved in with my then GF. We decided that we would track all our expenses (this was a good idea at that stage in the relationship, since we decided to merge finances more than normal), and further break down our grocery bills into nutritional groups. I have a nice table of how much I spent on meat, dairy, vegetables, grains, sweets… Talk about a lot of work. Interestingly enough, I got $29.55 in bottle returns. Anyway, I had the data but it did nothing. 50% of my food budget was still eating out. I still ate the same. Great idea, tremendous amount of work, accomplished very little.
(1) It’d be a four component thing—cash float, credit cards, direct withdrawals and paycheck.
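The “~15 minutes a month” cashflow check described above (using the four components from the footnote) amounts to a couple of lines of arithmetic. A minimal sketch, with all numbers purely illustrative:

```python
import math


def monthly_cashflow(paycheck, cash_float_delta, credit_card_spend, direct_withdrawals):
    """Net cash in minus cash out for the month, using the four
    components from footnote (1)."""
    return paycheck - (credit_card_spend + direct_withdrawals + cash_float_delta)


def months_to_goal(goal, current_savings, cashflow):
    """Whole months needed to reach a savings goal at a steady monthly
    cashflow; None if the goal is never reached at this rate."""
    if cashflow <= 0:
        return None
    return math.ceil((goal - current_savings) / cashflow)


# e.g. "can I save enough in time for a 4-week trip to Europe?"
flow = monthly_cashflow(paycheck=3000, cash_float_delta=100,
                        credit_card_spend=1500, direct_withdrawals=900)
print(flow)                               # 500
print(months_to_goal(4000, 1000, flow))   # 6
```

This answers the trip question without any of the per-receipt categorization that a traditional budget demands, which is the distinction the comment is drawing.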
Why should there be a general management system? To prevent fires.
Time and time again I meet people who operate constantly in “kill any fire” mode, instead of letting a couple of fires burn down by themselves and preventing any further fires from happening. For example: instead of paying bills only when they become so urgent that they crowd out more important things, like looking for better insurance, people should set up a policy of paying bills at given intervals. Or set up auto-pay.
What an exhausting way to live. I don’t care about bills and where to put them. And there are a thousand more issues that I need to have cleared up in my life, like what to wear or where to get my food from, none of which I deeply care about or which is my main focus in life. Or in any way my area of expertise. Which makes me wonder in how many ways I live like that: putting out fires instead of preventing them in the first place.
Since none of those areas—paying bills, doing taxes, getting food and so on—are my area of expertise or something I deeply care about I am very willing to compromise: Not getting the optimal result in exchange for something extremely easy to use that I do not have to think about. I have never seen something like that.
Nitpick: would have or would’ve, not would of.
I’m a big believer in Agile; professionally I’ve found that minimal management and especially minimal process works best. I use Trello to keep track of things that I need to do at some point in the future, or want to spend some time on (in a manner similar to the Getting Things Done advice, if I’m understanding that correctly), and that’s plenty.
If you do something more complex, remember to reevaluate your processes regularly, and prune any that aren’t pulling their weight.
Look into quantified-self tools for tracking ability. Some tracking I find useful: Pocketbook, RescueTime, and Fitbit (other activity trackers exist) for sleep tracking, to help estimate time left in the 168-hour week. I also keep lists in an easy-to-check location. Each year I have a home folder for that year; any project started or worked on in that year gets a folder inside it. I keep nesting limited: sometimes nesting happens, clutter happens too, but I try not to clutter that year’s home folder. So far this works well for me.
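The per-year folder convention described above could be sketched as a couple of shell commands. The `~/years` root and the project name here are illustrative assumptions, not the commenter’s actual layout:

```shell
# Hypothetical sketch of the per-year home folder convention:
# one folder per year, one flat subfolder per project, minimal nesting.
YEAR=$(date +%Y)
ROOT="$HOME/years"   # illustrative root directory, chosen for this sketch

mkdir -p "$ROOT/$YEAR"

# Create a project folder directly under the current year's folder.
new_project() {
  mkdir -p "$ROOT/$YEAR/$1"
}

new_project "example-project"
ls "$ROOT/$YEAR"
```

Keeping every project one level below the year folder is what keeps the structure easy to check at a glance, per the comment’s warning about nesting and clutter.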
How do you actually get a first job? I haven’t completed my degree, am struggling to live on my government provided student allowance, and don’t have any experience to put on my CV.
Maybe too obvious but: Ask around. Ask your friends, family, acquaintances… Your personal network is a key factor to finding a job (especially if you’re not picky about what job you want).
As for rounding out your CV, create a category called “skills”. Skills you probably have: fluent in MS Office, research (you’re at university right now, no?), out-of-the-box thinking (you’re on LW, after all, so you’ll be better than average at this), works well in a team (if you did any sort of team sport or online game)...
If you can’t come up with anything for the “skills” category, ask your friends and family. They should be able to help you out.
Thanks, this is helpful. I’ve been hesitant about contacting family and friends for this, but on reflection there’s no real reason for that.
I got my first real job by summing up all the volunteer work and major personal projects I’d ever done and putting them on my resume. It turns out that at least at the entry level, people don’t actually much care if you’ve gotten paid for doing something before—they just want to be able to verify that you know enough not to flail around wasting money for months or years while you learn the basics of process.
(I’m in tech.)
You never sacked groceries in a supermarket during your teen years, as I did?
I can see why you might want to avoid having to do that in today’s economy, however, where you have to invest your time in the most efficient work experiences you can get to position yourself for higher wages quickly.
I have been accepted to App Academy and have been considering it as a faster option to getting high-paying work, but as a younger, international candidate I’d have to pay for flights as well as a US$5000 deposit. It’s something I’d even be willing to borrow money for, given my waning motivation for university, but without an income or much current earning potential I don’t know if I could get a loan for it. And I couldn’t earn enough to go to the round I’ve been accepted to in time.
Anecdotally someone close to me did one of those and it was a quick way to burn thousands of dollars.
I tried to dissuade them, but in the end they came back with less knowledge than I had of the subject, and all I did was follow some YouTube tutorials and look at Stack Overflow to create a couple of learning apps for Android.
That assumes the goal of App Academy is building knowledge. I don’t think it is. The goal is getting a well-paying job. Somnicule is likely smart and has knowledge of programming, but without formal credentials he still doesn’t know how to get a job.
Paying thousands of dollars to go from a state of not having a job to a state of having a $50,000 job is okay.
Anecdotally two people close to me did similar crash camps on coding and ended up with high-paying coding jobs despite having no experience in software development and degrees in unrelated fields. They seemed to do well, but since this isn’t a controlled experiment I can’t say whether the jobs they got are jobs they’d have not been able to get if they just studied on their own for a while.
The general incentives for this one seem better than average (they generally take a cut of your first year’s income rather than an upfront fee, high average income afterwards, etc.), but I’d get a different, fixed-payment contract since I’m from NZ. It’s tempting, but higher risk than if I were from the US, especially since without a completed bachelor’s it’d be much harder to get a visa, and work here or in Australia won’t pay nearly as well.
I’m really losing motivation at university and have my own mental health issues, so the prospect of something like that has been somewhat comforting as an escape route.
This has turned into “somnicule’s personal problems” rather than the actual point of the thread, so I’ll leave it there.
Your school might have useful resources. If there is a career center, go there and see what kind of resources and help are available. There could be a student internship program, student job boards, career fairs, etc. Professors sometimes have work opportunities as well (they might announce these, or you may have to ask).
Informational interviews let you get an insider perspective and some connections and pointers. I’ve known people who got offers this way. People love giving advice. Also consider volunteering in your area of expertise to both gain experience and build your networks.
Were you involved in any activities in high school? (Sports, clubs, volunteering, etc.) Are there any interesting projects you’ve completed on your own? (E.g. do you have a blog?) Do you have any hobbies? Personal qualities to highlight? How do you spend your time?
Rowing for one season, competed in a mathematical modeling competition and didn’t win but got a special mention from the judges, traveled to Ecuador for a month as a volunteer project, did some maths and physics tutoring.
Personal projects are pretty limited, but I’ve got a Django play money prediction market site that I could get running again in a weekend.
Beyond that, there’s nothing that leaps to mind.
Those all sound pretty good to me. You might even come across as overqualified for menial work. If you want to reach a little bit, have you thought about trying to get freelance software development work? Getting paid to improve skills is always nice, right? If you can’t find anyone in real life, ODesk.com is an option… if you build up a reputation on that site, you’ll be able to work remotely from anywhere in the world. A degreeless friend of mine was making over $50 an hour after over-delivering for ODesk clients for 6-12 months.
I found my first job using this tool http://mappedinny.com/. It’s specific to tech jobs in New York City. I have no idea how anyone ever gets a job in other places/sectors.
You said you did some programming. Having your Django play money prediction market on github for people to see that you can program is likely valuable.
You can go to meetup.com and search for local meetups where programmers go. Then you go there and tell everyone that you speak with that you are searching for a job as a programmer.
I’m not sure if it’s the same where you live, but I found using the phone much more effective than sending applications all over the place. I first sent the application, then called. A good personal first impression, if you manage to squeeze it in, seems to beat a well written application any time.
I probably need to do some exposure therapy with phone calls, but it’s definitely worthwhile doing that.
A few of the people were absolute dicks when I called; they told me to just send the application. But then I figured I probably didn’t want to work for them anyway, so don’t give up if that happens.
What skills do you have?
[Meta]
While writing my recent post, I was thinking that it would be great if there were a summary of all the best answers received from widely shared ‘stupid questions’ in Stupid Questions Threads. I’d call this post ‘The Best Answers to Stupid Questions’, or ‘Frequently Asked Stupid Questions, and Answers’, and it would be analogous to how NancyLeibovitz summarized conclusions on procedural knowledge gaps. I might include responses from this thread in such a post. Among other questions, I’m wondering how I/we might gauge the best (answers to) stupid questions, aside from upvotes. Would I only include questions/answers that seemed most generalizable?
Also, if I wrote this, how much should I care about privacy? I notice I don’t actually have common-sense intuitions in this regard, so I’m asking honestly.
If someone has already posed a question in a Stupid Questions thread using their account, is it fair to assume that it’s fine to share it more widely by profiling it specifically in a post? Is this a non-issue?
If it’s not fair to assume so, how should I go about seeking consent to include the Q/A in the post? What’s reasonable?
The phrase “Non-Snappy Answers to Stupid Questions” seems to be nearly unique despite being a straightforward perturbation of a pop culture reference.
I’d call it “Smart Answers to Stupid Questions Repository”.
To your privacy concern, I’d say it’s a non-issue. If you post in a thread like this, the assumption is that you’re putting your post out in the open for The Entire World to read, link to, quote, rebut, etcetera (largely because this is, in fact, the case.)
Privacy can sometimes be a tricky issue. I don’t think this is here a problem as long as you stay within LW.
As far as copyright goes, content on LW gets licensed under a Creative Commons license.
I’m a year from completing a PhD in genomic science. I am now completely disillusioned with my field, and indeed professional life in general. I entered with ambition, and have been cleansed of it. I didn’t quit early on because I lost all my self esteem and assumed the problem lay with me, and that I would be equally unhappy elsewhere. I’m now almost sure this is wrong, but I only have about a year to go, and no idea what to do next, and am fairly well paid, so quitting seems imprudent.
I have basic statistical and coding skills (whose usefulness in the real world I cannot assess) and honestly no idea what I want to do with my life. I cannot imagine enjoying a job anymore, but intellectually, I’m aware this is probably just a result of my present, rather toxic environment. I would like something socially valuable and/or lucrative, but will settle for something which has normal work hours and doesn’t drain all the life out of me. My definition of socially valuable aligns well with that of the LW community, though I place much lower credence on a near-term Singularity than most here, I think.
I imagine this is a common-ish situation, and advice to me would be generally relevant.
1) Tell me if this is the wrong place for this kind of moaning. 2) Advice? Sources thereof? Finding a job? Overcoming apathy? 3) How do I assess the usefulness of one’s skills? Low-hanging ways of improving them?
Coding + stats skills + some biology = the world is your oyster (pharma, research labs/institutes, postdocing if you aren’t sick of academia yet, bio startups if you feel you can get invested in something again, etc). I am sorry you had a toxic experience in graduate school. I know this does not help, but I can tell you these are very common, especially in your field.
Just wanted to say good job for realizing the problem was probably with this job, not with you. You may find it helpful, motivationally, to talk to friends/acquaintances and ask them what they like best about their job, so that “a job that doesn’t make you miserable” feels more achievable and you feel more hopeful/driven about pursuing it.
I say this b/c a friend of mine was miserable at his job, and I realized how miserable when I told one funny story about my workplace, and he wondered if he could work there, specifically, because I didn’t seem unhappy. It was clear he didn’t alieve that you could be less unhappy than he was (throwing up every day) at most jobs. In that state of mind, it’s hard to get excited about applying!
Also, try to do nice things for yourself, generally. Being as unhappy as you sound in your job is kind of like having walking pneumonia—it’s a big energy drain on everything else you’re doing. You may find it helpful to ask “What would I do to be kind to a friend who had walking pneumonia?” and then do those things for yourself. You’ll wind up in a new job eventually, but, in the interim, you may want to make sure you treat the symptoms, as well as the root problem.
Ask your professors and school career services and fellow students what are some good non-academia career options. Most people who get a PhD in your field aren’t going into academia, so where are they going?
Research Labs and Pharma/other medical companies (which I imagine are your major prospective employers), or some bio-oriented software companies will have wildly varying work environments. Do your best to choose one that seems to have a friendly, lower-pressure environment. Cultivate your life outside of work, and make friends inside. Save some money.
There’s good odds that the professional world will treat you better and demand less than your PhD program.
Then figure out where you want to go from there.
As a fellow biology PhD student with a somewhat different experience (with you on the work hours, but I’m really glad I’m here), I hope you don’t mind me asking what has been so draining about the experience?
There are a lot of things wrong, I don’t know which of them is the most important...
1) I have zero control over my own work. I am working frantically all the time to complete analyses requested by other people, most of which turn out to be useless or ill thought out. People generally don’t understand programming and stats enough to know how long things should take.
2) My boss is widely regarded as a bit of a tyrant. I have a powerful aversion to interacting with her in any way, and she has extremely poor communication skills. Our relationship is terrible. I think this is my fault as well; I work a lot but seem to get little done, and whenever I’m around her I feel a crushing sense of guilt and insecurity (this is all pretty melodramatic and childish, and my issue more than hers, but there it is).
3) The culture in my lab is about producing papers, not discovering things. I have the impression that almost no one really gives a crap about what we’re studying.
4) I’m in a small town in a foreign country which I hate.
5) I have no belief in the value of the work we do. Nor do many of the smart people I’ve talked to. Many of these smart people have quit the lab recently. I was attracted to the lab by work that I didn’t have the skills to fully understand at the time. Knowing what I know now, I’d never have come. Our results take the form of vague correlations, and have no practical relevance to anyone.
6) Because I transferred into statistics from wet work a year into my PhD, my boss has been reluctant to give me any real responsibility. She’s given me these vague, ill-thought-out side projects (other people’s opinions, not my own), which have yielded dead end after dead end.
7) I generally have the impression that I am bad at my job. I have extreme difficulty focusing on my work and I make a lot of embarrassing bugs and errors. Part of this may be due to diagnosed attention difficulties, but I think most of it is just a total lack of interest. Forcing yourself to do something well when you don’t care about it is difficult.
I think overall it’s not so much the negative stuff as it is the total lack of positive reinforcement. Aside from a few brief false alarms, I have literally never had any success. I know there must be parts of the job that other people find rewarding, but I simply haven’t experienced them.
Get in touch with them. They’ll understand your frustration, and might sympathize enough to give you better help than we can.
If you need a plan to improve your situation, you might consider Athol Kay for advice. He is not uncontroversial (see this thread), but he provides a clear, tested plan for dealing with your situation (the book is not only about personal relationships; it also applies if you feel stuck in a job).
From what I’ve seen, people who have been wasting away in academia and described their position much like yours have become WAY happier when transitioning to the private sector.
A response from someone who is happy in bioscience research:
The personality of your boss is very important. I always paid a lot of attention to my emotional responses to potential bosses during interviews. Privately asking for the opinions of their subordinates can also help. As you write below, your boss “is widely regarded as a bit of a tyrant”. There must be a way to find that out in advance next time. For me, a meaningful topic is not enough to save the job if the boss is a toxic person. On the other hand, a less sexy topic can become meaningful if the boss lets you follow your curiosity and your ideas (within the constraints of the budget). The working hours are also boss-dependent.
However, I must make one point: I am not saving the world or gunning for the Nobel Prize. My goal is to play and be somewhat useful to society while doing that. None of my papers is a dramatic thing; it rather feels like a tiny drop into the ocean of knowledge. Some people at LessWrong say: if you do not perceive your topic as the most important issue in the world, change the topic. This is a test I would not pass. I just hope to be a little helpful.
Yes, ignoring this advice was in retrospect very foolish. I badly underestimated how important the supervisor/student relationship is. I’m going to be a lot more careful next time.
I don’t know what the academic job market is like in genomic science, but if you would take pleasure in teaching you might enjoy working at a liberal arts college, which would likely be a very different environment than being in a PhD program.
Teaching is something I would love to do, but I was given to understand that you basically have to do research nowadays, due to the glut of academics.
It depends on the field and the school. If you don’t care about status, you will have a much easier time finding an academic job where teaching undergrads is considered an important part of your job.
What’s a good place online to ask questions about sex and sexual problems? LessWrong feels a little too public to me...
/r/sex is pretty good
::googles::
Yeah, it seems like that is the right kind of place. I think the standard links have answers to most of my questions anyway; my particular problem seems common enough.
We already know you are sick, so feel free to ask!
I don’t want to embarrass my girlfriend.
Hikikomori no more? If so (as seems likely what with the girlfriend and all), it gladdens me to hear it.
It’s a little bit complicated; I’m a night owl and my girlfriend has insomnia, so we spend a lot of evenings at home together. My sleep schedule is seriously out of sync with the rest of the world right now. A typical day might consist of getting out of bed around 4 PM or so, lounging around a bit, taking my mom out to “dinner” some time around 9-10 PM, going home, visiting with my girlfriend at her place, going back home at around 2 AM, starting to put my mom to bed, finally finishing around 4 AM, playing video games until 6 AM, and then falling asleep. (My mom has multiple sclerosis and is in a wheelchair; she needs a lot of help and doing anything with her takes a very long time.)
xkcd forums.
You can make another account or just ask as Username, but I’m not sure LW is.. um… err… competent to answer questions about sex X-D
Empty Closets.
::googles::
My girlfriend and I are both cisgender and heterosexual, so that particular site won’t be too helpful for us...
/r/RedPill
Based on what I’ve heard of the “RedPill” belief system, I wouldn’t guess that it would be the place to ask about things like these...
How do I stop being a hipster? I saw Bryan Caplan advising his readers to read Scott Alexander and my first reaction was “Oh no, a well-known blog is recommending people read my favorite little blog. Now more people will read it and I won’t be as special.” I know this feeling is irrational, but how can I overcome it?
Whether this feeling is irrational depends on what causes it. It makes sense to worry about a community you like becoming popular, since it means that an increasing number of people would join it, potentially reducing its quality.
I don’t think that’s what caused my angst, I think I was worried about becoming less special because more people were reading my favorite blog.
I think you want to be special—that’s an entirely legit and useful desire. The part that needs adjusting is deriving your specialness from obscurity of your interests.
If the only thing that makes you special is your choice of blogs that you read, you aren’t very special.
It’s like people who think that just because they change a few superficial things about their clothing and violate societal clothing norms, they are rebels. They are not. That kind of being a rebel feels a bit pathetic to me. Being a rebel actually means doing something that has an effect. That shakes up society. Don’t optimize for superficial appearance. Dare to go deeper.
If you go deep, then superficial issues such as how many people read your favorite blog won’t really concern you anymore.
You should nurture that feeling if it drives you to push ahead to the next level of excellence that is beyond what is common today.
If it means you are just trying to be different for the sake of social signaling, then it is not quite so useful a feeling.
Tentative advice: Try exploring what you actually like and dislike, and distinguishing that from what you expect other people will think of your tastes. Also, you may need both privacy and thinking about how other people don’t necessarily know enough about you to judge you.
Index funds have been recommended on LW before. I have a hard time understanding how it would work investing in one, though. Do you actually own the separate stocks on the index of the index fund, or do you technically own something else? Where does the dividend money go?
(I’d be remiss if I didn’t link this Mr. Money Mustache post on index funds that explains why they are a good idea)
To buy an index fund, you buy shares of a mutual fund. That mutual fund invests in every stock in the chosen index, balanced based on whatever criteria they choose. Each share of the mutual fund is worth a portion of the underlying investment. At no point do you own separate stocks—you own shares of the fund, instead.
Toy example: You have an index fund that invests in every stock listed on the New York Stock Exchange. The fund invests in $1,000,000 of stock split evenly among every stock on the NYSE, then issues a thousand shares of the fund itself. You buy one share. Your share is worth $1,000. You can sell your shares back to the fund and they will give you $1,000. Over the next year, some stocks go up and some stocks go down. The fund doesn’t buy any more stock or sell any more shares. On average, the nominal value of the NYSE will go up by about 7%. The fund now owns $1,070,000 of stocks. Your one share is now worth $1,070.
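The toy example above can be sketched in a few lines of Python (the numbers are the made-up ones from the example, not real market data):

```python
# Toy model of the fund above: $1,000,000 of stock backing 1,000 fund
# shares, so each share tracks 1/1000 of the pool.
fund_assets = 1_000_000
shares_outstanding = 1_000

nav_per_share = fund_assets / shares_outstanding
print(nav_per_share)  # 1000.0

# The underlying stocks rise ~7% over the year; the fund neither buys
# stock nor issues new shares, so each share simply tracks the pool.
fund_assets = fund_assets * 1.07
nav_per_share = fund_assets / shares_outstanding
print(round(nav_per_share, 2))  # 1070.0
```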
The dividends go wherever you want them to. The one share of a thousand you bought above entitles you to 1/1000 of the dividends for the underlying stocks in the fund’s entire investment. If you’re smart, they go to buy more shares of the fund because compound interest will make you rich. You can have them disbursed to you as money you can exchange for goods and services, though.
Investing in an index fund is very easy. You will pay by direct withdrawal from a bank account, so you will have to do something to confirm you own the account, but other than that it’s like buying anything else online.
Index funds cover costs—which are low, because buying more stock and re-balancing existing stock can be done by a not-that-sophisticated computer program—by charging you a small percentage of your investment. This is reflected by your shares (and dividends) not being worth quite 100% of the fund’s value. Index funds are good because they have a very low expense ratio. Many normal mutual funds charge upwards of 1% annually. A good index fund can charge about 0.20%-0.05%. That means you pay your fund about $20 for the privilege of making you about $700, every year.
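A quick sketch of why that expense-ratio gap matters over time, using the ~7% nominal growth figure from above (illustrative numbers only; taxes and tracking error are ignored, and the fee is modeled as a simple deduction from the annual return):

```python
# Compare a typical 1% managed fund with a 0.05% index fund over
# 30 years, both earning the same ~7% nominal return before fees.
def grow(principal, years, annual_return=0.07, expense_ratio=0.0):
    net = annual_return - expense_ratio
    return principal * (1 + net) ** years

stake = 10_000
active = grow(stake, 30, expense_ratio=0.01)    # 1% annual fee
index = grow(stake, 30, expense_ratio=0.0005)   # 0.05% annual fee
print(round(active), round(index))
```

On these assumptions the cheap index fund ends up roughly 30% larger after 30 years, which is why the expense ratio is the headline number to compare between funds.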
Opinion time: I own shares in index funds. They are amazing. For a few hours work setting up an automatic transfer and filling out paperwork, I am slowly getting rich. I don’t need the money any time this decade, so even if the market crashes tomorrow in a 2008-level event, overall the occasional 1990s-style rises cancel that out, leaving real growth at about 5% assuming you use any dividends to purchase more shares.
I will let you skip the next part of this process and recommend a specific fund: the Vanguard Total Stock Market Index, VTSMX. It invests in every stock listed on the NYSE and NASDAQ. If you have $10k invested in it, the expense ratio is a super-low 0.05%, and American stocks are very broad and exposed to world conditions as a whole (this is good—you want to spread out your portfolio as much as possible to reduce risk). Go to vanguard.com; you can figure it out online.
I think I could talk about the minutiae of investing all day. It’s fascinating. I should write that post about investing and the Singularity one day.
The key is predicting what will happen to interest rates.
How would you estimate the probability that the post-Singularity world would consider pre-Singularity property to be too silly to be worth bothering with?
The most likely outcome is that pre-singularity property rights would indeed be meaningless post-singularity because (1) we are all dead, (2) wealth is distributed independent of pre-singularity rights, (3) scarcity has been abolished (meaning we have found a way of creating new free energy), or (4) the world is weird.
The Fermi paradox causes me to give higher weight to (1), (3) and (4).
Shouldn’t outcome 2 be given higher weight on account of having actually happened before? Reallocation of wealth seems to be a pretty common outcome of shifts in power.
Yes
Another investing question: if I already have some stocks that were given to me as a gift, am I better off selling them and putting the funds in an index, or just holding them?
Additional info: I already have a well funded index fund and a retirement account, the stock value would be around 10% of their (combined) value. I’ve owned the stocks for 10+ years.
As Lumifer said, if you sell stocks (and they’re up) you pay taxes on the capital gains—the difference between the price of the stock when you bought it and the price now. If the price now is lower, you get a tax credit for the losses, up to a certain point. Capital gains taxes tend to be lower than regular taxes (in America, at least). Selling shares of an index fund works the same way, where you pay taxes only on the gains, so selling stock to buy what is essentially more stock is pretty much a wash—you don’t pay more taxes overall, you just pay them now instead of later. I’m not sure whether being a gift affects the taxes, or what your basis is for capital gains. Investopedia might know, or ask an accountant.
Pretty much the choice of whether to sell the stock and buy more shares of the index fund is like any other choice in investment: which will make you more money? To simplify the math, imagine you sold all the shares now and paid taxes, so you had $X and could invest that in stocks or an index fund. Keep in mind the status quo bias—it is unlikely you would invest in this specific stock if you had $X to invest, and you should only keep the stock if that were the case (tax issues exempted—you’ll have to do the math yourself).
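To make the “pay taxes now vs. later” comparison concrete, here is a minimal sketch assuming a flat capital-gains rate and identical growth in both investments (the numbers are made up; real situations involve gift-basis rules, bracket changes, and so on):

```python
def hold_then_sell(basis, value_now, growth, tax_rate):
    """Keep the stock, sell at the end, pay tax on the whole gain."""
    final = value_now * growth
    return final - tax_rate * (final - basis)

def sell_now_and_reinvest(basis, value_now, growth, tax_rate):
    """Sell today, pay tax, reinvest; the reinvested amount becomes the new basis."""
    after_tax_now = value_now - tax_rate * (value_now - basis)
    final = after_tax_now * growth
    return final - tax_rate * (final - after_tax_now)

# Stock with a $5,000 basis, worth $10,000 now, doubling over the
# holding period, at a 15% capital-gains rate:
print(hold_then_sell(5_000, 10_000, 2.0, 0.15))         # 17750.0
print(sell_now_and_reinvest(5_000, 10_000, 2.0, 0.15))  # 17112.5
```

On these numbers the two outcomes land within a few percent of each other, with a small edge to deferring the tax: money that would have gone to taxes keeps compounding in the meantime.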
This is basically a tax issue. Selling the stocks would be a tax event so you need to calculate whether paying taxes now (instead of later) will be worth it.
Thanks for the detailed response. The link was very good, too.
There are two typical ways to invest in an index fund, plus one way that isn’t.
Buy a mutual fund that mimics the index you want to buy. You technically own shares in the mutual fund, which is an undivided right to a tiny percent of the whole pool. To get your money out, you have to redeem your shares, which happens at fair market value. (It used to be that redemptions happened at the market closing price; I don’t know if that is still true.) To fund redemptions, the fund has to keep some cash on hand, so some small percent of your money isn’t actually invested. If you invest in mutual funds in a taxable account, the churn of fund holders’ redemptions creates trades that cause capital gains, which create taxable income for the fund as a whole. It will be a small percent, but it will still be there (on top of taxation of any dividends). This is because the legal model applied to your investment is like you are a partner in a partnership, where you get an allocation of profits and losses that is separate from your receipt of cash.
Buy an exchange-traded fund that mimics the index you want to buy. These are still mutual funds in operation. The main difference here is that the shares in the fund are themselves tradable. Theoretically, I think this means that the fund does not have to redeem as much, so it can hold much less cash. It would do redemptions if people were bailing out of the market generally, so that sellers outnumber buyers. But the issuer can essentially make money by arbitraging any difference, which means redemptions effectively pay for themselves. Being exchange traded means that the mutual funds have to make some additional SEC filings, but these have become routine since ETFs became popular, and the costs are spread over a truly vast number of people. The other main difference is that you only pay taxes on capital gains when you sell your ETF shares. That is because the legal model treats an ETF like an investment in a corporation, where the corporation’s profits and losses are not attributed to you, and you only get something when you get cash.
You can also buy all the shares yourself, which is sometimes called a “synthetic” fund in rarefied circles. It eliminates all of the potential capital gains taxes until you sell the underlying assets, but it means that you have to buy a set of shares that would be really expensive and only comes in oddly sized chunks. For example, if you want to buy the stocks in the Dow Jones, that is 30 different stocks, and you would have to buy one of each. That might cost you $1,531.72. Who wants to buy investments at a price of $1,531.72 per unit? What do you do if you have $1,500 or $1,600? That is why ETFs are so attractive. They get all of the convenience of the mutual fund and the capital gains tax treatment of owning shares directly. Index funds, whether regular mutual funds or exchange-traded funds, also have the benefit of spreading fixed costs over huge numbers of people, so the expenses are usually very low.
Exchange traded funds are actually a little more clever than that. From the fund’s perspective, it doesn’t redeem fund shares for cash. The ETF has a number of “authorized participants” which are banks that can create new shares of the ETF. To create new shares of the ETF, they purchase the underlying shares in large quantities (called a “creation unit”) and provide them to the ETF, which then gives back the authorized participant the matching number of ETF shares. So for an S&P 500 ETF, the bank would first purchase all of the underlying shares of the S&P500 in the right quantity (generally a very large quantity) and provide them to the ETF, and the ETF would hand them X number of ETF shares that the bank can now sell. The banks make money through arbitrage, by buying and selling creation units when they are out of alignment with the price of the ETF.
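The arbitrage loop described above can be sketched as a simple decision rule (the function name and threshold are illustrative, not any real trading API):

```python
def arbitrage_action(etf_price, nav_per_share, threshold=0.001):
    """What an authorized participant does when the ETF's market price
    and the value of the underlying basket (NAV) drift apart."""
    gap = (etf_price - nav_per_share) / nav_per_share
    if gap > threshold:
        # ETF trades rich: buy the underlying basket, deliver it to the
        # fund for new ETF shares, sell those at the higher market price.
        return "create"
    if gap < -threshold:
        # ETF trades cheap: buy ETF shares, redeem them for the basket,
        # and sell the underlying stocks.
        return "redeem"
    return "hold"

print(arbitrage_action(100.50, 100.00))  # create
print(arbitrage_action(99.40, 100.00))   # redeem
```

That buying and selling pressure is what keeps the ETF’s market price pinned close to the value of what it actually holds.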
A synthetic fund is slightly different from your description. A synthetic index is when you use derivatives to replicate the performance of an index fund. So rather than taking your $100 million charitable endowment and investing it in an index, you take your $100 million and invest it in treasury securities, then enter into a swap contract that will replicate the performance of the index. Future payments go into your fund, and future shortfalls are removed from your fund. It can also be done with futures and the like.
Anyone know how much this is applicable to the UK?
It is very applicable to the UK. sixes_and_sevens’ document is a good start, but note that there is a whole world of online platforms now (Hargreaves Lansdown, Bestinvest, etc.) which can be much more convenient and give you more control.
Note that there is SDRT of 0.5% on UK equities, which you will likely pay as a secret hidden tax on your index fund—the more ethical companies, such as Vanguard, make this clear—so you may wish to choose your index fund to avoid this pernicious tax. If you work in the UK, you are probably implicitly overexposed to the UK economy anyway, so this is an argument to track a global, non-UK stock index.
Most of the information on index funds that has been provided in this thread, and in links from this thread, is applicable to the UK. I suspect all the information you’re likely to retain, and need, is applicable.
Here is a quick-and-dirty document I threw together for London LWers looking to invest in index funds but not knowing where to start. There are better guides out there but they are necessarily longer. Googling “UK index tracker” will get you quite far.
Interviewers encourage interviewees to ask questions, and people giving presentations ask for questions. Also, people in general seem to like it if you ask questions about them. But not all of them do. How do I choose good questions to ask in an interview, after a talk, or with a person, especially when the interviewer answered all of my obvious questions, as did the talk?
I think it depends on your goals.
Have you already decided you want the job? “Based on this interview and the information I have given you, what do you see as my strengths compared to other candidates?” Same question for weaknesses but think about not highlighting them unless you are pretty sure you can defeat them by addressing them. One way to ask about them would be to ask something more leading, like “what are you unsure about, where you would like to hear more from me about the topic?”
Are you really just thinking about the job, where you already have adequate employment? Then ask about what matters to you in a job.
Listening to a talk for fun or learning? I think contextualized questions about the application of ideas are good: they bring the abstract down to the concrete and sometimes reveal more. So, you might start a question with, “I am a ___ and in that work I see a lot of ___ . How do you think your idea of ___ applies to a ___ where ___ is usually the case?” I think the key there is not to focus on the introduction. Most people like to talk about themselves, so it is easy to fall into that trap and forget about asking a question.
I had a job interview earlier today for a job that I’ve already decided that I want. Having recently read this post, I tried the “what do you see as my strengths” question in the field. I don’t think it went well. It seemed like a very thinly veiled attempt at making them say positive things about me and I suspect the interviewing panel was smart enough to realize this. The result, to my perception, was that I came off as the kind of person who would attempt to deploy cheap petty psychological tricks. Rather than calling me on this, the interviewers played along and said some nice things about me, but I feel like it left a bad impression on them that I asked them to do so in such a transparent way.
I am posting to warn others to consider the possibility of this outcome before deploying this tactic.
UPDATE (7 November 2014) - I was offered the job today. Not sure what effect, if any, this interview tactic had, but I feel like I should at least disclose this result for posterity.
Definitely a good thing to look up beforehand. I might recommend the Manager Tools interviewing series. The key point is that even this part of the interview is NOT about you learning anything. The entire interview is you selling yourself. There’s a view out there that “you’re interviewing them too,” and you should hope you’re competing against other candidates who think that.
Anyway, this is a bit of a tangent, as that wasn’t specifically about just job interviews, but the interviewing series nonetheless addresses it insanely well. You probably want a link: I think it’s www.manager-tools.com.
The “you’re interviewing them too” line is absolutely true if you are in a competitive market and are not desperate for a job. If you are unemployed, your best strategy is to get any job in your field, work there for a few months, then start hunting for another job. If you have a job and skills the market values (and thus expect to be able to get multiple job offers in the course of a few months), you can afford to be selective. This means you should not take a job offer unless it’s an improvement over your last job, and it’s enough of an improvement that it’s worth it to stop searching. There is a post somewhere on LessWrong about the decision theory of how long you should search on an open-ended problem like this, I believe with marriage as the subject, but “don’t take a job that sounds like it would grind your soul to dust” is a good starting point, as is “never take a pay cut, or an insignificant pay increase”. Switching jobs is a pain, and you can’t do it too often.
Getting back to the point: in an interview you should ask three main kinds of questions: questions that make you seem smart, questions that show you were paying attention, and questions that you actually want to know the answer to. If you can do two or all three at once, great. A good stock question is “Can you walk me through what a typical day in this position is like?”, because it’s rarely answered earlier and it’s good to know. It’s amazing how often people will talk about a job in generalities and not say, e.g., whether you are going to be sitting in a chair pressing buttons all day or whether you’re going to be traveling, attending meetings, washing beakers, whatever. “How big a team will I be working with?” is another one, because again it sounds like you care about the particulars of the job, which you should if you’re going to be working there for months or years. You should also have two or three relevant questions in your specialty to trot out.
Finally, don’t wait until the end of an interview to ask questions. It’s best if you have a conversation, not a monologue. Don’t interrupt, but if there’s a break in the interview ask about something that you want to know about. You might find you have no questions left at the end—just tell the truth, that you already asked everything you wanted to know.
Source: I have a job. I also know quite a few people who are part of the process from the employer end.
Is continuing to write the story I started, which has turned into a novel, a good use of my time? Or am I suffering a version of the sunk cost fallacy, and would I do better abandoning the story and starting something new (and presumably more properly rationalist), now that I’ve gotten my writing muscles properly exercised and in working order?
If you develop a reputation for not finishing your stories, men who value closure are going to be reluctant to read them. Consider that you have already abandoned Myou’ve Gotta be Kidding Me.
Hm, now that you mention it, I think abandoning Myou’ve Gotta be Kidding Me was quite a good idea.
I’m curious; would you care to expand on your reasoning here?
To be blunt but vague, I thought it was bad, and I think your S.I. novel is fun to read and probably stretches your writing muscles more.
It’s probably good to not be trapped by the themes and plot arcs of the past if it turns out they’re not so hot.
True. (Which is part of why I’m asking in Stupid Questions instead of Open Thread.)
As a possibly relevant detail, if I do end up unable to keep my motivation going, I have a backup plan to at least quickly finish up the current story, if not tie up as many of the dangling plot threads as I currently want to; specifically because of my experience with Myou’ve.
I think you should finish the story before you lose motivation. Writing without motivation would be really bad; I certainly don’t recommend it. But developing a habit of not finishing things could also be dangerous in the long term.
Perhaps you could treat “finishing the story” as a separate project. Imagine that someone else wrote the story, then the original author died, and you have inherited this task. You want to finish the story meaningfully, without reducing its quality, but you also don’t want to prolong it any more than necessary.
:) In a nutshell, that’s the trick I had in mind as an alternate source of motivation, if I can’t keep my current motivation levels for writing the story up for a few more months.
Novel writing is almost never a good use of one’s time, as can be seen from how much one is usually paid for it. If you’re enjoying it then carry on; if not, don’t.
I don’t think it’s the sunk cost fallacy—if you’re having fun, you should keep going. Your best work is still in your future.
Unsolicited silly advice: you’re probably going to have to revise more in that future.
I have to say that I still am—I just managed to turn “Punch and Judy” of all things into a significant plot element. :)
I certainly hope so.
By ‘revise’, do you mean in the British sense of studying to get my background details straight, or the American sense of rewriting to improve previously existing text?
I am very afraid of bugs. I have to psyche myself up to get close enough to a bug to smash it. I have, on more than one occasion, decided to go to work without showering because there was a spider in the bathroom. Ants and flies don’t bother me for whatever reason, but spiders/moths/beetles/grasshoppers/silverfish and almost anything else bug-like are very disconcerting to look at or be near.
This only rarely interferes with my life, but it is very frustrating. I’m not sure if I’m looking for a way to remove the irrational fear, or a coping strategy. “Keep gloves and lots of RAID around the house” is my current best idea.
Exposure worked great for me. I used to be scared of all sizes of cockroaches. After living in an apartment which was absolutely filled with small German cockroaches for a couple of years, I have lost all fear of roaches that size and smaller, while still retaining fear of the larger American cockroaches. I predict that if I were to move into Joe’s apartment, my fear of large roaches would likewise be cured in due time.
I have also slowly lost my fear of spiders (and other long-legged critters such as crane flies) by virtue of encountering and examining them from ever closer distances. My usual response to seeing a spider in the bathroom these days is to grab a generous amount of toilet paper and squash the disgusting thing, whereas a few years ago they would have forced me to flee the scene. I like to think of it as having leveled up to the point where I can now defeat my house’s random encounters.
Deliberate self-directed exposure therapy reduced my fear of certain types of bugs. It isn’t pleasant, but I think you can be quite optimistic about the odds of success.
There actually is a way! Exposure therapy works on phobias and apparently something like 90 percent of people have significantly reduced fear even 4 years after the therapy. There’s even an app for it!
I used to be creeped out by house centipedes, but I decided to get along with them after reading that they are generally harmless to humans and useful to have around because they kill all sorts of other household pests.
I think just remembering that they are a good thing and thinking of them as being on my team was helpful. I also gave cool names to the ones living in my basement (e.g. Zeus, Odin, Xerxes) and talked to them e.g. “Hi, centipedes. Keep up the good work, but please do try to stay away from me during the day, and remember our deal: you can live here, but my species has an ingrained fear of you guys, so if you drop down onto me from the ceiling or something I’m probably going to instinctively smash you.”
I keep wrapping paper around, both for wrapping presents and to give me a tool for killing bugs at a can’t-jump-on-me-from-here distance.
Hi everyone, I have a question related to the possibility that we live in an infinite universe, and the ethical implications that follow. I’ve been thinking about this a lot lately, and I’ve looked over Nick Bostrom’s paper on infinite ethics which, if I understand it correctly, suggests that in an infinite universe containing infinite positive value (good) and infinite negative value (evil), it appears to be the case that nothing we do can ever really matter ethically because all we can do is a finite amount of good or evil (which has no impact on an infinite value).
But I have seen discussions of multiverse ethics on Less Wrong where commenters are seemingly talking as if they are able to act in an ethically meaningful way in an infinite cosmos, talking about something referred to as their “measure”, and of increasing their measure. I’m afraid I do not understand at all what they are talking about.
Can someone please explain in layman’s terms what this sort of talk is all about (sometimes the discourse at Less Wrong is over my head, so as simply and clearly as possible please!). What is “your measure” and how can it be that it matters if the amount of positive value and negative value in the universe is infinite? Sorry if I am misunderstanding something basic and this question is stupid. Thanks!
Not sure if infinities really exist. Pretty sure it’s important to not be a dick.
You don’t have to be sure that infinities exist to explore the possibility. Have you looked at Nick Bostrom’s paper? Do you think Bostrom was wasting his time?
In math, “measure” is a way of assigning a “volume” (or length, area, probability) to infinite sets. The “cardinality” of the set of numbers between 0 and 1 is the same infinity as that of the set of numbers between 0 and 2, but the standard “measure” of the latter set is twice as big. You can sum up certain types of functions defined on a continuum by “integrating” those functions’ values using the appropriate measure.
If there turns out to be a continuum of possible universes created by, say, a particle decay, then there’s also a natural physical measure that corresponds to the probabilities we observe; the set of universes in which the particle decays before 1 half-life would be “twice as big” in some sense as the set of universes created in which the decay occurs between one half-life and two half-lifes. If someone offers to do something for you if-and-only-if a particle decays before one half life elapses, you should figure out the expected utility of a 50-50 bet, even if the reality might be that your decision is affecting two different infinities of subsequent universes.
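To make the “twice as big” claim concrete, here is a minimal sketch (my own illustration, not from the original comment) using the standard exponential decay law, with time measured in half-lives:

```python
# Exponential decay: Pr[decay before time t] = 1 - 2**(-t),
# where t is measured in half-lives.
def decay_before(t):
    return 1 - 2 ** (-t)

within_one_half_life = decay_before(1)                   # 0.5
between_one_and_two = decay_before(2) - decay_before(1)  # 0.25

# The first set of outcomes has twice the measure of the second,
# even though each may correspond to infinitely many universes.
print(within_one_half_life / between_one_and_two)  # 2.0
```

So even if both sets of universes are infinite, the physical measure lets you weight them 2:1 and compute ordinary expected utilities.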
There’s a lot I’m glossing over and/or don’t understand myself here (why is the probability measure the only ethical measure? lots of different-but-self-consistent measures can always be mathematically well-defined) but hopefully that at least explains the vocabulary a bit.
This is an extremely clear explanation of something I hadn’t even realized I didn’t understand. Thank you for writing it.
The probability measure is the one that’s conserved by physical time-evolution of the system, no? It would be a bit weird to have an ethical system where universe A was worth the same as universe 1 and then a few minutes later it was only worth half as much.
Bostrom suggests the “domain rule” to discount unobservable infinities. Seems pretty reasonable.
If the universe is infinite, then there are infinitely many copies of me, following the same algorithm, so my decisions create infinite amounts of good or evil (through my copies which decide the same way).
Or, to see it from another angle, if the universe is literally infinite, then it is more or less infinitely repetitive. So let’s take a part of universe containing a copy of approximately everything, and treat this part as a finite universe, which is just replicated infinitely many times.
Your “measure” is the proportion of your copies to the infinite universe.
Does this follow? The set of computable functions is infinite, but has no duplicate elements.
The measure of simple computable functions is probably larger than the measure of complex computable functions and I probably belong to the simpler end of computable functions.
Ok, I think that helps a bit. But if there are infinite copies, how can you talk of proportions? It’s not like anything you do decreases or increases the number of copies, right?
It sounds much like a defensive argument for evil behavior. Obviously Good and Evil are just abstractions which only humans can measure, and at the level of the universe these things don’t exist and don’t matter.
My first thought is that that kind of argument is exactly the kind that even a highly rational person would be wise to respond to with epistemic learned helplessness. It involves putting infinities into a calculation about real-world decisions, and it’s an argument that all ethical actions are meaningless.
Also, if one values amount of good accomplished instead of amount of good in the universe, the infinite universe doesn’t change anything.
That said, I don’t think the multiverse interpretation implies an infinite number of universes unless the universe is also infinite in space or time, so people discussing the multiverse may not believe in an infinite universe.
Suppose you have a suite of events A1, A2, A3 … An that correlate with event B with varying strength. You want to calculate the a posteriori probability of B using Bayes’ theorem, and you obtain a set of numbers. How do you decide which one is the correct one? Or must you just do these calculations several times and then pick the best indicator? Sorry, I suspect there is a simple answer...
Are you trying to find the probability of B given all n events, that is, Pr[B|A1, A2, …, An]? In that case, none of the calculations Pr[B|A1], Pr[B|A2], …, Pr[B|An] are useful, necessarily. In fact, even if each Ai individually makes B more likely, together they may make B less likely.
(For example, suppose we are rolling a fair 6-sided die, and take A1 = “We get 1 or 3”, A2 = “We get 2 or 3″, and B = “We get 1 or 2”. Then Pr[B] = 1⁄3 before we condition, since 2 out of 6 outcomes satisfy B. If we learn either A1 or A2, then Pr[B|A1] = Pr[B|A2] = 1⁄2, since 1 out of the remaining 2 outcomes satisfies B. However, if we learn both A1 and A2, then Pr[B|A1,A2] = 0, because then we know that the outcome must be 3.)
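The die example is small enough to verify by brute-force enumeration; here is a quick sketch in Python (my own check, not part of the original comment):

```python
from fractions import Fraction

outcomes = set(range(1, 7))  # a fair 6-sided die
A1 = {1, 3}                  # "we get 1 or 3"
A2 = {2, 3}                  # "we get 2 or 3"
B = {1, 2}                   # "we get 1 or 2"

def pr(event, given=outcomes):
    """Pr[event | given] by counting equally likely outcomes."""
    return Fraction(len(event & given), len(given))

print(pr(B))                 # 1/3
print(pr(B, given=A1))       # 1/2
print(pr(B, given=A2))       # 1/2
print(pr(B, given=A1 & A2))  # 0
```

Each conditioning step just shrinks the sample space, which is why learning both A1 and A2 can move the probability in the opposite direction from learning either one alone.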
If this is not what you mean, please elaborate.
Thank you. I want to pick the exact A that would point me to B. But I apologize, I should have labeled them A (and -A), C (and -C), D (and -D)..., because they are actually different things that happen simultaneously with B. There might be (should be, even) interdependence between them (at least most of them), so I won’t use it as a very reliable indicator. Just a way to quickly estimate what I expect to see.
Are you, then, trying to find which event gives you the most information about whether or not B occurred?
Yes, that’s it. It is just that some of the tests are much more expensive, to the point that I won’t be able to do them routinely, but others which are quick and easy to perform might not give me the necessary information.
The value of a test A for learning about B is measured by the mutual information I(A;B). The tradeoff between this and how easy the test is to perform is left up to you.
Here is a brief overview of the subject. As far as notation goes: I want to distinguish the test A from its outcomes, which I will denote a and -a.
The information content I(a) of an outcome is given by the formula I(a) = - log Pr[a]. (The log is often taken to be base 2, in which case the units of information are bits.) The formula is motivated by our desire that if tests A1, A2 are independent, I(a1 and a2) = I(a1) + I(a2); the information we gain from learning both outcomes at once is the sum of the information learned from each outcome separately.
The entropy of a test A is the expected information content from learning its outcome: H(A) = I(a) Pr[a] + I(-a) Pr[-a]. Intuitively, it measures our uncertainty about the outcome of A; it is maximized (at 1 bit) when a and -a are equally likely, and approaches 0 when either a or -a approaches certainty. Ultimately, H(B) is the parameter you’re trying to reduce in this problem.
We can easily condition on an outcome: H(B|a) is given by replacing all probabilities with conditional ones. It is our (remaining) uncertainty about B if we learn that a was the outcome of test A.
The conditional entropy H(B|A) is the expected value H(B|a) Pr[a] + H(B|-a) Pr[-a]. In other words, this is the expected uncertainty remaining about B after performing test A.
Finally, the mutual information I(A;B) = H(B) - H(B|A) measures the reduction in uncertainty about B from performing test A. As a result, it is a measure of the value of test A for learning about B. Irrelevantly but cutely, it is symmetric: I(A;B) = I(B;A).
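The definitions above translate directly into code. Here is a minimal sketch in Python (the joint tables are my own made-up examples): given the joint distribution of a binary test A and the binary event B, it computes H(B), H(B|A), and their difference I(A;B).

```python
import math

def entropy(ps):
    """Shannon entropy in bits: H = -sum p*log2(p); terms with p=0 contribute 0."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

def mutual_information(joint):
    """I(A;B) = H(B) - H(B|A), where joint[a][b] = Pr[a, b]."""
    pa = [sum(row) for row in joint]
    pb = [sum(col) for col in zip(*joint)]
    # H(B|A) = Pr[a] H(B|a) + Pr[-a] H(B|-a)
    h_b_given_a = sum(
        p * entropy([pab / p for pab in row])
        for p, row in zip(pa, joint) if p > 0
    )
    return entropy(pb) - h_b_given_a

# A test whose outcome determines B exactly: I(A;B) = H(B) = 1 bit.
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))      # 1.0
# A test independent of B: I(A;B) = 0 bits.
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # 0.0
```

Real tests will land somewhere between these two extremes, and you would then trade the bits gained against the cost of running each test.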
...so if I perform tests A and B simultaneously 25 times, and from those 25 trials estimate Pr[a], Pr[-a], Pr[b] and Pr[-b] and calculate I(A;B), and THEN I look at the result for A26, I should be able to predict B26, right? And if I(A;B)>I(C;B)>I(D;B), then I take test A as the most useful predictor? But if the set from which the sample was taken is large, and probably heterogeneous, and there might be other factors I haven’t included in my analysis, then test A might mislead me about the outcome of B. (Which will be Bayesian evidence, if it happens.) How many iterations should I run? Is there a rule of thumb? Thank you for such helpful answers.
So there are two potential sources of error in estimating I(A;B) from sample data:
The sample I(A;B) is a biased estimator of the true value of I(A;B), and will see slight patterns when there are none. (See this blog post, for example, for more information.)
Plus, of course, the sample will deviate slightly even from its expected value, so some tests will get “luckier” values than others.
Experimentally (I did a simulation), both of these have an effect on the order of 1/N, where N is the number of trials. So if you were comparing a relatively small number of tests, you should run enough iterations that 1/N is insignificant relative to whatever values of mutual information you end up obtaining. (These will be between 0 and 1, but may vary depending on how good your tests are.)
If you have a large number of tests to compare, you run into a third issue:
Although for the typical test, the error is on the order of 1/N, the error for the most misestimated test may be much larger; if that error exceeds the typical value of mutual information, the tests ranked most useful will merely be the ones most misestimated.
Not knowing how errors in mutual information estimates tend to be distributed, I would reason from Chebyshev’s inequality, which makes no assumptions about this. It suggests that the error should be multiplied by sqrt(T), where T is the number of tests, giving us an error on the order of sqrt(T)/N. So make N large enough that this is small.
Independently of the above, I suggest making up a toy model of your problem, in which you know the true value of all the tests and can run a simulation with a number of iterations that would be prohibitive in the real world. This will give you an idea of what to expect.
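As one concrete example of such a toy model, here is a sketch in Python (the setup and numbers are mine) showing the upward bias of the sample estimate of I(A;B): two independent coins have true mutual information exactly 0, yet the plug-in estimate from N samples comes out positive, on the order of 1/N.

```python
import math
import random
from collections import Counter

def plugin_mi(samples):
    """Plug-in estimate of I(A;B) in bits from a list of (a, b) pairs."""
    n = len(samples)
    pab = Counter(samples)
    pa = Counter(a for a, _ in samples)
    pb = Counter(b for _, b in samples)
    return sum(
        (c / n) * math.log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
        for (a, b), c in pab.items()
    )

random.seed(0)
N, trials = 100, 2000
# A and B are independent fair coins, so the true I(A;B) is 0,
# but the sample estimate is non-negative and biased upward.
estimates = [
    plugin_mi([(random.random() < 0.5, random.random() < 0.5) for _ in range(N)])
    for _ in range(trials)
]
mean_estimate = sum(estimates) / trials
print(mean_estimate)  # a small positive number, on the order of 1/N
```

Rerunning this with different N shows the bias shrinking roughly in proportion to 1/N, which is the effect described above.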
Oh, thank you. This was immensely useful. I now will pick some other object of study, and limit myself to a few tests (about 8). I kinda suspected I’ll have to obtain data for as many populations as possible, to estimate between-population variation, and for as many trial specimens as possible, but I didn’t know exactly how to check it for efficiency. Happy winter holidays to you!
Is it less annoying if, instead of asking to borrow your brains, I propose to play DM in a tabletop campaign where you’re playing a character who looks suspiciously like me? Is it better if there are actual Akratic Goblins instead of just keeping things 99% realistic? (Couching it in Dungeon Punk metaphors might kinda break the point, which is to trick my brain into doing a better job at communicating the problem(s), and to trick at least one stronger brain into taking the challenge so’s I can borrow the winning strategy. But if there’s a necessary engagement tradeoff to overcome my general lameness, then I should try to make it?)
You might get more answers to this question if it was more easily understandable. I can tell that you’re trying to communicate through metaphor, but it’s so opaque that it’s more like a riddle than a question. Do you think you could restate your question in more direct language (or with another metaphor to give us something to triangulate with, if you’re more comfortable with that)?
No, I’d say it’s more annoying. I enjoy helping people with their problems (it’s why I spend so much time on Stack Overflow, even though it’s probably not the best use of my time). I hate playing bad RPGs.
I’m not sure if I understand the question, either. But, if there’s someone whose brain you’ve already asked to borrow before, asking them whether it’s annoying is probably a thing you should do (assuming you’re similar to me).
I tend to feel like I’m asking for free stuff and offering nothing in return when I ask for advice, especially from people who I don’t know well (or don’t know at all—like forums I’ve just joined). But, most people who I ask for advice don’t act like I’m wasting their time, and the times I’ve asked whether they mind, they usually say they don’t mind or they like being helpful for the sake of it. And when other people ask me for advice, I tend to enjoy the conversation if it’s a topic I have any knowledge/experience in, and sometimes it’s sort of flattering to be asked. So, it looks to me like mentoring people is something that most humans enjoy doing, but some humans don’t ask for it as often as we should because we mistakenly think it’s a burdensome request.
(Also, as a data point, straightforward brain borrowing feels like a better offer than RPG playing. Probably because lending brains feels like building social capital and doing something useful, but RPG format registers as self-indulgent thing that I don’t have time for, and unlikely to be as fun as just talking about it would be.)
Having at least a friendly acquaintanceship with the person to begin with does make it easier to ask, though. Getting to know intimidating people who seem like they’re a level above yours, and possibly actually are, is a difficult thing that I don’t know much about myself. Though it’s probably more about dealing with one’s own nervousness and awkwardness than anything—the higher level person will probably see it as ordinary making friends. And some people don’t mind strangers asking them for help, so even if you don’t know the person you want to talk to, it could still be worth asking. Or you can ask in a group context, like a forum, and see who shows up.
Speaking of which, I’m gonna make myself ask the bootstrap startup forum for help by the end of the day today, because your post reminded me that I really need the advice and really shouldn’t be so scared of offending them by being too new and unknowable to reciprocate right away.
If I add a comment to an old Sequence article or respond to a comment on an old Sequence article, will my comment get noticed by anyone? I’ve seen comments added years later that did seem to get noticed, but how do people notice them?
They pop up at the top of the “recent comments” list, and plenty of people read LW comments via that list (there are actually two, one for Main and one for Discussion).
In addition, people get a notification when someone responds directly to a comment they made.
Meta: I didn’t read this thread for some time and only later got back to it (when I ran out of comments :-). On reflection I think it is due to the negative connotations with ‘stupid’. We can hope that everyone here is immune to the halo effect. Or we can look for a better title.
‘Simple questions’
‘Quick questions’ (implies urgency)
‘Naive questions’ (still connotations but not so strong ones)
‘Less available Answers’ (putting the idea on its head)
The reason for calling it stupid questions is to reassure people that they won’t lose reputation or be insulted for asking questions that might be obvious to other people.
OK, I understand that; makes sense. The simple/quick questions presumably go to the open thread. But the approach still reduces the number of readers of these questions.
Maybe, but previous iterations of this thread have gotten hundreds of comments so I don’t think it’s a big deal.
For being Respectable, I liked the older thread’s title ‘Procedural Knowledge Gaps’ (1) (2). That is a narrower topic, though (but it’s the topic I’m reading this thread for anyway).
Is the following (conspiracy?) theory implausible? Could it be that Google and the other big players are well aware of the dangers of AI and are actively working internally to thwart those dangers? They could present an image of no concern so as not to spark any government intervention into their affairs, or public concern getting in the way of their business plans. But should we not believe that they have considered very thoroughly whether there is imminent danger from AI, and are running their business in accordance with the resulting utility function?
Isn’t viewing the present situation with the big AI players as a tragedy of the commons scenario too simplistic? Even from say Google’s selfish perspective, should it not be working to stave off an AI apocalypse?
I believe Google has an AI ethics board due to MIRI/FHI/&c. influence. If you are correct about their long game, then MIRI&c almost certainly knows of it, and perhaps you could test out the hypothesis by looking for shadows in those organizations’ strategy reports or something if you’re truly interested. May or may not be a great idea to shout the results from the rooftops, however.
When Google acquired DeepMind, part of the deal was that Google gets an ethics board. The impression I got was that the purpose of that board is to keep DeepMind from evolving into a UFAI.
Jaan Tallinn who made his fortune in Skype is one of the big donors for MIRI and was also a huge investor in DeepMind. I would imagine that he and a few other people who are less public are responsible for demanding the board.
I think every MIRI workshop did include someone from Google.
Luke did write somewhere that MIRI got six figures in its lifetime from Google matching funds for employee donations.
On the other hand, Google is a big corporation; “well aware” is not a precise term when thinking about giant companies.
The most straightforward explanation of Google’s behavior is:
As you say, they have considered very thoroughly whether there is imminent danger from AI, and are running their business accordingly.
The conclusion of their consideration is that there is no imminent danger from AI.
You can’t assume that the danger from AI is so definitely true that anyone competent who considers it will come out agreeing that there is danger.
Who has given the issue serious consideration? The only example I can think of where someone gave it serious consideration and concluded we don’t need to worry is Robin Hanson, but I really have no idea how to identify, or even estimate, the number of people who seriously considered the issue, decided there was nothing to worry about, and then went about their lives without mentioning it. Any thoughts on how to approach the question?
It would be special pleading to bring up “Google has seriously considered it” when that is part of “Google has seriously considered it and is hiding it”, yet to not bring that up when it is part of “Google has seriously considered it and has decided it’s nothing to worry about”.
It is of course possible that Google has not considered it at all, but that would apply to Rasputin496′s original suggestion as much as it applies to mine, so mine still would be a more straightforward explanation than his, even if it’s not the most straightforward explanation on an absolute level.
(If you think I shouldn’t have used the word “most”, but should have said something like “most out of all other explanations that make the same assumptions”, then sure, I’ll accept that.)
How do you approach a huge, pedantic writing project like a thesis or a review paper? Despite reading some self-help on the subject, I feel completely stuck and overwhelmed, and I don’t know where to begin each day. If I manage to do some part of the project, I do it way too thoroughly and waste time. I don’t seem to have terrible problems with akrasia on other kinds of projects; it’s just that I’ve never done anything this big that requires intense self-monitoring from beginning to end.
Examples of how you approach different kinds of big projects like programming an application are welcome too.
I break the project down into small sections (I hate writing long things, and if I had written my book as a book, rather than as chapters, each divided into ~1k word sections, I would not have written it). So, the first question is how it makes sense to start divvying up.
For the book, I knew what the different chapters would be (Rosary, Divine Office, Examen, etc), so I made a Freemind diagram of all the points/ideas/etc I expected to use in each of those sections. (And that just needed to be enough of a handle for me to remember what I was talking about. Seeing a sub-bullet under “Confession” that said “Tam Lin” probably wouldn’t be much help to anyone else!).
So, when I worked on the book, I wasn’t working on the whole book. I just needed to turn “Confession → Tam Lin” from notation into text.
I did a similar thing with my college thesis, where I started by grabbing references, dumped them all into a doc, moved them around so they were grouped together in categories like “Human Flesh Search” “Gov’t Using Collective Score-Settling” etc, and then worked on individual sections.
I thought of what I was doing as a very small task, just filling in connective tissue between citations and examples. This tends to help me a lot.
I think the other benefit of the “break it down” approach is that you don’t wind up with a blank document thinking “What will I do?” You sit down saying “Ok, today, I need to explain why we should approach Confession in the same spirit as Janet did her rescue of Tam Lin.” Writing and choosing what to write work better for me if they are two separate tasks.
I do this sort of thing by starting as broadly as possible. Assuming you already have the majority of the information you need (ie, the research phase is more or less over), you should be able to sit for 15 minutes or so and make an albeit disorganised list of broad themes that you want to include in the paper. Concentrate during this phase on making the list, not evaluating what you put on it (some things will turn out to be irrelevant, some will be duplicates or link closely with each other or spark new interesting ideas—but make an effort to ignore all this at this point).
Once you’ve got your list, you can spend some time ordering it (so that closely linked items follow each other), discarding items that turn out not to fit in, and so on. Try and stay broad at this point (though you can jot down elsewhere more detailed points that you might want to make, if they occur to you; and it’s fine to add new items that get sparked). This process should help you figure out what your “narrative arc” might be, what conclusion you’re working towards, and so on. If something seems important but confusing, it might mean you need to do more research for that section.
Now you’ve got an outline that essentially consists of chapter or section headings, and some ideas for what’s going in your introduction and conclusion. Depending on the length of the paper, you might want to do another outlining step in greater detail (listing your points in each section, but still not actually filling out the writing), or you could start writing now. I tend to work on writing the sections that seem easiest first, and then join up the more difficult bits afterwards.
Inevitably, it turns out during this phase that some of the links are weak or disjointed, some of the arguments I originally intended to make are poor, and the conclusion I thought I was heading for is actually not quite where I end up going. So lots of adjustments to the outline take place and the whole thing needs a good rejig at the end. But editing is straightforward enough when you have an already extant text to work on!
That’s the sort of process that works (usually, well enough) for me: I’m sure others do it completely differently. Maybe you can pick out some stuff that seems useful in there, though.
On the akrasia level, I find that the harder the task seems, the more frequent “reward” hits I need for working on it. For me, these hits mainly consist of getting to cross an item off my to-do list. So if I’m really struggling with a paragraph, my to-do list can contain such fine-grained items as “Think about the structure of [paragraph x]”, and “Write a sentence explaining how RelevantAuthor (2012) is relevant here”. Even a poor effort at doing these things gets the item crossed off (though if it still needs re-doing or more work, it will of course get put on again).
Have you found any good solutions besides the ones already mentioned?
A few premises
a) Some animals matter
b) Not all animals matter, some extremely simple animals don’t matter.
c) There are anti-correlations of the form: the more cows there are, the fewer insects (or rodents) there are. - that hold true in our world in which one of the species is substantially more cognitively capable than the other.
d) There is currently no consensus on how simple a mind or cognitive system has to be for it to matter. Arguably this consensus cannot be reached, since different hypotheses will use different correlates to try and find the morally worthy thing.
Conclusion
e) We do not know now, and won’t know in the medium-term future, whether increasing or decreasing consumption of farm animals is desirable from a utilitarian perspective.
I don’t know where this argument fails. I’ve shown it to many EA’s and no one saw a big problem so far. However, some people think this is just stupid, and I’m happy to see it proven wrong.
To say the same in less abstract form: World 1 and World 2 have the same amount of land. In World 1, people eat cows and raise cows, so there are 100 cows and 1000 insects. In World 2 people are vegan and there is more forest land, World 2 has 10000 insects and no cows. Which world is ethically better seems to hinge on the comparative moral worth of insects and cows. Given we don’t know what it’s like to be a bat, or a cow, or a bumblebee, we cannot decide which world is ethically more desirable. Therefore we have no reason to direct our actions to make our world more like 1 or 2.
You can change insects (or rodents) and cows for any pair of animals that are anti-correlated in nature, and cognitively dissimilar, and where the size of the anti-correlation is larger than your certainty about which animal is more morally worthy.
Animals that matter may be better off not existing than living in suffering. Cows’ greater moral worth might be exactly the reason you would rather not create and torment them, as opposed to insects.
On the other hand, see http://foundational-research.org/publications/importance-of-wild-animal-suffering/ , which argues that insects have lives with negative value because of suffering. In that case, World 2 is strictly worse than world 1: fewer cows (who do have lives with value) and more insects (whose lives have negative value).
Drethelin and Jiro, I was taking for granted (because it is a common opinion among vegans) that cows’ lives are morally unworthy, and that if insects’ lives are like anything at all, they are awful.
The reasoning works in all four cases: animal 1 has a positive life, animal 2 has a positive life, and they are anti-correlated in nature; both have negative lives and are anti-correlated; or, when the signs differ (1 is positive, 2 is negative, or vice versa), they are positively correlated.
EDIT: After discussing this here I had a long discussion over email about it with two EAs, and decided to put forth my final arguments:
I’ll give it my best shot. It is also my final shot. If it is not persuasive, I may give up on the task entirely (because I have a book on altruism and the world has a Superintelligence coming soon, and I feel we are reaching marginal levels of opinion change).
1) My argument relies heavily on the idea that any attempt to overcome the static friction of whatever food habits people have will require a lot of momentum. So the fact that we are stuck in a random local non-optimum is a feature of the argument, not a failure. The same would go for people arguing in favor of speaking Esperanto, or re-establishing the rationalist community in Palau (there are some posts about this around). It’s the static friction that matters. I think we are stuck with QWERTY until the Intelligence Explosion. I think we are stuck with some distribution of vegans, vegetarians and causal-assassins until the IE.
2) I’m not out to end veganism and vegetarianism on the grounds that there is a lot of uncertainty that won’t be resolved before the IE. I’m just out to stop high-status people within the EA community from trying to make people change in either direction. I’m out to save EA time and attention. I’m on this topic because I’ve seen countless hours of discussion between really smart, productive, awesome, world-saving people, in Brazil, the UK and the US, where time was being dedicated to this as if it were a super clear-cut net good, when in fact it isn’t. Not as good as increasing insight, coordination, cooperation, control, safety-savvy AI tech, building community, getting the order right, differential progress or fundraising, for instance.
3) Maybe I should not worry. Maybe there are cheaper hours than those dedicated to veganism to put to good use among the high intellectuals. But dietary habits have a two-thousand-year-old tradition of being used as shibboleths, implicit markers that distinguish friend from foe. And once again, as I told you here: as long as both teams continue in this lifelong quest together, and as long as both shut up and multiply, it doesn’t matter. At the end of the day, we (have reason to) act alike. I just want to make sure that we get as many as possible, as strong as possible, and set the controls for the heart of the sun.
4) Because I’m out against public veganism advocacy within EAs, LWs, CFARs, I’ve advocated in the past for Private Veganism, for outsourcing vegetarianism, and even, on the same post, for pandemizing veganism, like a vaccine in the water supply. I like animals. I just like future animals as much as I like current animals, so if animals are stealing attention away from my FAI friends, like you guys, I’ll make my stand “against” them, for them.
5) Bostrom puts it clearly. I cite Peter Singer (forthcoming) (I can’t cite him here on LW because it’s unpublished, sorry). [Omitted text in which Singer quotes Bostrom; my emphasis.] The point being that presentations or posts within the EA community do not increase the number of EAs; they only scatter EA time, which is in part (along with the redbull+rockstar drink) why I felt so averse to that seemingly harmless presentation.
6) Eliezer Yudkowsky puts it ironically: “Okay, so all of those risks should affect 4e20 stars which should beat the present value of all human and animal life on the surface of one planet making inefficient use of around a millionth of the output of one star. I do understand that this perspective may sear away some people’s souls, but in reality we are a tiny little blue speck containing a little tribe of tiny people (and animals), a tiny blue speck from which hangs, downward in Time, a vast heavily-populated world. A world of people who are helpless, who have no voices that can move upward and reach the tiny blue speck, who can only look up desperately at the tiny blue speck and hope we don’t screw up, because if that tiny blue speck snaps, their whole huge world will drop out of Time into the void of never-existed. The joys and sorrows of the village of tiny people and animals on that tiny blue speck don’t matter very much compared to the sheer terror of dropping the entire heavily-populated civilization that is, somehow, hanging from that tiny blue speck. They cannot speak for themselves, so I try to speak for them.”
That is my case against public advocacy of dietary habits on moral grounds; it is similar, though shorter-sighted, to Paul’s “Against Moral Advocacy” at Rational Altruist. I have no intention of Pascal-mugging or Pascal-wagering, and I’m willing and able to change my mind about these topics. But I find the force of these arguments (those here plus the one on top) to be overwhelming.
See also: http://reflectivedisequilibrium.blogspot.com/2013/07/vegan-advocacy-and-pessimism-about-wild.html
Do we know what the relative moral worth of cows and insects is? No. But we can make a best guess based on the available evidence, the same way we do with any other kind of uncertainty. It seems to me like this argument is just “we can’t be certain about anything, therefore we have no basis on which to choose one action over another”, dressed up a little.
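That “best guess” move is just an expected-value calculation over hypotheses about moral weight. Here is a toy version of the World 1 / World 2 example, where every probability and weight is an invented placeholder, not a claim about actual animal welfare:

```python
# Toy expected-value comparison of World 1 (100 cows, 1000 insects)
# and World 2 (no cows, 10000 insects).
# The probabilities and moral weights are invented placeholders.

hypotheses = [
    # (probability, moral weight per cow, moral weight per insect)
    (0.5, 1.0, 0.001),   # insects barely matter, cow lives net positive
    (0.3, 1.0, -0.01),   # insect lives are net negative (wild suffering)
    (0.2, -0.5, 0.001),  # farmed-cow lives are net negative
]

def expected_value(cows, insects):
    """Average the value of a world over the weighted hypotheses."""
    return sum(p * (cows * cow_w + insects * insect_w)
               for p, cow_w, insect_w in hypotheses)

world1 = expected_value(cows=100, insects=1_000)
world2 = expected_value(cows=0, insects=10_000)
best_bet = "World 1" if world1 > world2 else "World 2"
# Under uncertainty, the higher expected value is still the better bet.
```

With these particular made-up weights World 1 comes out ahead, but the point is the procedure, not the answer: disagreement about the weights changes the output, not the method.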
You assume that this moral worth objectively exists waiting to be discovered and known.
No I don’t, it can be subjective and the argument still goes through.
If you comply with the VNM axioms then you have an (effective) utility function and so that moral worth is calculable. And if you don’t follow those axioms you get dutch-booked.
How does that work? Alice thinks the moral worth of a cow is high enough not to mistreat or eat it. Bob thinks that the moral worth of a cow is zero and cares only about the quality of his steak. How are you going to reconcile their views (or even estimates)?
I’m not. Each of them has a moral opinion and knows what they believe to be the right action. Their disagreement is an ordinary moral disagreement; there are plenty of other moral questions where there is no consensus.
So in this context what does knowing “the relative moral worth of cows and insects” mean?
The same thing as knowing how delicious a certain food is.
Sure, but then there is no problem in knowing, ever. You said “we don’t know, but we can make an estimate” and with respect to my personal opinion about how delicious a certain food is, I have immediate direct knowledge and no need for estimates.
I’ve read the posts on group selection and decided this is somewhat relevant. I don’t know the math deeply, so this will be qualitative.
There are mutually beneficial symbioses: flowers + pollinators, roots + mycorrhiza, that kind of thing. They are robust not because they allow for altruism, but because they are (usually) mutually beneficial. I think it is worthwhile to look into the meta-question of how to measure their efficiency, and whether AI ecology is comparable to the ecology of non-intelligent things. Is the fitness of a symbiosis equal to the (maybe weighted) sum of the partners’ fitnesses (which might be hard enough to measure)? What do you think? Is it time-scale dependent? What if there are multiple AIs, each of which cares about itself but cannot directly eliminate any other, only ‘crowd it out’?
Suppose there are two largely interdependent kinds of agents, F and P. F compete between themselves for resources, and so do P. F and P exchange resources at certain rates and are relatively ‘loyal’. (Though some P parasitize upon other P and so have only vicarious links with F.) They promote each other’s survival, both short-term (a bad season) and long-term (a new generation), but a season fatally bad for P usually doesn’t eradicate F. Their ‘loyalty’ is not absolute: e.g. P1 will only form liaisons with F1, but F1 can also work with P2-P45 simultaneously, though never with P46, and will gain most if it partners with P29. Most P can partner with most F, and vice versa. If an F has already negotiated treaties with several P, that can influence its readiness to make another commitment either way or not at all, but generally, both F and P leave more than enough offspring to start anew each season (iteration). But the iteration’s start and finish are different for each F and P.
Let P be humans and F be AI (originally these were plants and fungi). How do you compare the efficiency of their partnership? I mean, we don’t need AIs that are not contributing to our welfare, right? Sorry if this is an illegitimate analogy.
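To make the “weighted sum of fitnesses” question concrete, here is a toy simulation of the F/P setup above; every weight and growth rate is invented, and nothing here is an established measure from ecology:

```python
# Toy model of a two-partner symbiosis (F and P as above).
# All weights and growth numbers are invented for illustration.

def partnership_fitness(f_fit, p_fit, w_f=0.5, w_p=0.5):
    """One candidate measure: a weighted sum of partner fitnesses."""
    return w_f * f_fit + w_p * p_fit

def simulate(seasons, bad_seasons_for_p=()):
    """Each season the partners grow and exchange resources; a season
    fatally bad for P knocks P down without eradicating F."""
    f_fit, p_fit = 1.0, 1.0
    history = []
    for season in range(seasons):
        exchange = 0.1 * min(f_fit, p_fit)  # mutual benefit
        f_fit = f_fit * 1.05 + exchange
        if season in bad_seasons_for_p:
            p_fit = 0.2                     # crash, but F survives
        else:
            p_fit = p_fit * 1.05 + exchange
        history.append(partnership_fitness(f_fit, p_fit))
    return history

# Comparing the history at different cut-off points shows how any
# such measure depends on the time scale you evaluate it over.
history = simulate(20, bad_seasons_for_p={5})
```

Whether a weighted sum is the right aggregate at all is exactly the open question; other candidates (the minimum of the two fitnesses, or long-run survival probability) would rank the same partnership differently.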
How is the name “Yvain” pronounced? Also, is there any meaning behind it?
What are the arguments for/against owning a house in a given city, vs eg renting, travelling between multiple cities/couch surfing, and other possible ways of living?
The New York Times has a pretty good rent-buy calculator with 21 variables.
Khan Academy has a pretty good introductory rent-vs-own video. As has been mentioned above, the answer depends heavily on your career and preferences, as well as a bunch of values specific to your circumstances. (Khan notes he rigged the numbers so renting came out ahead economically, but he could just as easily have come to the opposite conclusion by changing some values. I count something like seven free variables: rent, house cost, down payment, loan interest, property tax, upkeep, and investing returns, not counting the intangibles.)
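To see what those variables do, here is a minimal sketch of the comparison such calculators run. Every number below (rent, price, rates) is an invented placeholder, and the model is deliberately crude (interest-only mortgage, no closing costs, no rent inflation):

```python
# Toy rent-vs-buy comparison over a fixed horizon.
# All numbers are illustrative placeholders, not financial advice.

def future_value(cashflows, rate):
    """Compound a sequence of yearly cash contributions to the end."""
    total = 0.0
    for cash in cashflows:
        total = total * (1 + rate) + cash
    return total

def compare(years=10, rent=18_000, price=300_000, down=60_000,
            mortgage_rate=0.045, property_tax=0.012, upkeep=0.01,
            invest_return=0.05, appreciation=0.03):
    loan = price - down
    # Crude yearly cost of owning: interest-only loan + tax + upkeep.
    own_cost = loan * mortgage_rate + price * (property_tax + upkeep)
    # The renter invests the down payment, plus (or minus) whatever
    # owning would have cost relative to rent each year.
    renter_portfolio = future_value([down] + [own_cost - rent] * years,
                                    invest_return)
    # The owner's wealth: down payment plus price appreciation
    # (principal repayment and selling costs are ignored here).
    owner_equity = down + price * ((1 + appreciation) ** years - 1)
    return renter_portfolio, owner_equity

renting, owning = compare()
print(f"renter ends with ~${renting:,.0f}, owner with ~${owning:,.0f}")
```

Nudging any one input (say, a higher `invest_return` or lower `appreciation`) can reverse the ranking, which is exactly Khan's point about being able to rig the numbers either way.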
An argument relevant to men interested in women: to my knowledge, women find men who own homes more attractive than men who rent (who, I’m guessing, are more attractive than couch surfers). According to Mark Manson’s model (which is on empirically sketchier ground, although he’s the only guy whose dating advice I’ve seen unanimously recommended in the LWsphere), being a couch surfer will do little to impair a man’s ability to have one-night stands, but will greatly impair his prospects for anything longer-term.
Depends wildly on your career and preferences. This is too big a stupid question to give one stupid answer to.
Pro: long-term stability for oneself or the family, if you have one, more control over your living space, real estate is potentially a good (and a more disciplined) investment, mortgage interest tax deduction (in the US).
Con: flip-sides: tied to a specific area, illiquidity of your capital, being forced to sell or rent out, possibly at a loss, if you cannot afford to pay the mortgage or have to move for other reasons.
The standard rule of thumb “buy a place when preparing to start a family” probably still works 80% of the time.
To further clarify: I mostly meant non-monetary arguments. EG: If you own a house you will have to do upkeep on it vs if you rent an apartment the landlord will be in charge of it. Quality of Life things.
Renting:
Landlord covers most maintenance. Most rental apartments will even mow the lawn and shovel snow, and even renting a house will usually cover any serious elbow work. If the gutters leak or a kid throws a baseball through your window, you call the landlord and it gets handled. This isn’t just monetary: it saves you from having to spend time hunting these things down.
Mobility. If you got the job of your dreams and the love of your life halfway across the country, leaving a rental apartment or house requires days of work, whereas a purchased home may take months or even years to sell and absorb hours of your life monthly for that whole time period.
It’s usually easier to rent closer to your workplace or desired location.
Renter’s Insurance is usually easier to get and cheaper than home owner’s insurance (because it needs to cover fewer things).
Some places may come with appliances, and rental apartments usually have easier access to home services. This isn’t monetary—you’re paying rent for those appliances, efficient market hypothesis—but it saves you from having to possess a refrigerator or washing machine and dryer.
Housing:
You get to possess a refrigerator or washing machine and dryer, so you’re not stuck with a fridge built in the 1970s, or looking for a laundromat and twenty bucks in quarters.
Home improvement. It’s both possible and reasonable to make changes to a house you own. You’re responsible for making sure the water heater doesn’t explode, but this means you can replace the water heater if it’s the size of a small bucket. You’re responsible for cleaning gutters, but this makes sure that it actually happens. This may be more responsive, as well, especially if you’re reasonably handy.
Privacy. Even when renting a house, your landlord will almost always have the contractual ability to enter the building on fairly short notice (usually a day, sometimes less). Even good apartments let you hear more of your neighbors than you’d really want, and most apartments will have thinner walls than that. Especially valuable if you have odd hours.
Stability. Prices can change, sometimes significantly if you have a low-down-payment mortgage, but the risks of being required by law to move are much lower if you own your home. It’s rare to be able to rent a house for more than five years, and you’ll usually pay a premium if you try. Likewise, things around you change more slowly: the friendly next-door neighbor is not likely to move out and be replaced by a bunch of college grads in housing areas, just because of how the transaction costs work.
Status.
No landlord. Rental agreements often have various levels of surprisingly strict regulations on behavior. If you want unusual pets, or satellite television, or a garden, you may well have to purchase to have the option.
Some reading: pro-rent and pro-purchase.
Is breaking a Karma score of 1,000 still considered a threshold of becoming a ‘real’ member of LW?
I’ve got over 1000 and pretty much feel like an observer that periodically gets massively upvoted for injecting biology and astronomy information and analysis, which just happens to more than cancel out my negative karma from frequent jibes at local phenomena I find amusing or troubling.
Thanks for your contributions!
One way you’re a ‘real’ member is that I recognize your username and remember you as a distinct user who tends to write about specific stuff. Random new users are just indistinct name mass until they show up often enough to start getting recognized.
Though it probably makes you become a real member faster when your username helpfully describes the stuff you know and will be talking about.
Based on a quick look around this thread, 1000 karma does seem like a reasonably good rule-of-thumb threshold between a username being obscure and at least vaguely familiar. On the other hand, DataPacRat apparently only went over 1000 just recently but has been making discussion posts for years, so I’ve recognized their username for a long time now.
http://www.lewissociety.org/innerring.php
Don’t we have a list of “useful concepts” somewhere? This should go there.
C.S. Lewis: Secret Silver Slytherin?
Immediately reminded of that Slate Star Codex discussion of whether nerds (or other analogous groups) are more immune to status chasing than normal people.
I have ~200 Karma and feel like a ‘real’ (albeit quiet) member of LW, so… no? Also, this is the first time I’ve heard of the 1,000 threshold, and I’ve been at least lurking a bit more than a year.
I could be confabulating, but I have a memory of that threshold being involved in a survey from a few years ago.
A quick C-f through the surveys (or, at least, Yvain’s censuses) would indicate the threshold was 100 in 2012.
I run meetups in my city, which I guess probably makes me a ‘real’ member, and only have ~200 karma (in fact, I only broke that threshold by getting karma for taking the census).
I remember that it used to be 100 karma, although this was when the community was much smaller. Also, it was mostly used as a rule of thumb/heuristic for the personal accountability of one’s posts (e.g. some people would withhold a downvote if the person posting was new to the community).
I’m suddenly reminded of a tradition on Slashdot, where someone with, say, a six-digit userID mentions how the place is going downhill, only for someone with a five-digit userID to pipe up, then someone with a three-digit userID to post anything at all...
Ha, rereading my comment I see that it may sound pretentious, but this wasn’t my intention. Reading your comment just triggered this random factoid stored in my mind :)
For what it’s worth, I don’t recall ever seeing karma=1000 as a threshold of any importance, either for myself or for others. I’m not sure the idea of a “real” member makes much sense either.
I suppose I make some distinction between usernames I recognize immediately and those I don’t, which probably correlates pretty well with karma. And maybe between ones for which I have a reasonable idea of what sort of thing the user tends to post about, what kind of positions they take, how impressed (or not) I’ve been with their thinking, etc., and those for which I haven’t. That probably correlates with karma too. Perhaps the first one is kinda sorta a bit like karma>=1000 and the second is kinda sorta a bit like karma>=5000 or thereabouts—but those numbers are completely made up and the correlation isn’t really good enough for them to make a lot of sense.
No. You should either learn a bunch more math, or show the scars on your soul from demon-summoning. Or go to a bunch of in-person meetups and a CFAR workshop.
...no, the bite marks are not from a succubus, that’s the scratches on the back, the bite marks are from when I let my attention wander from a kitsune for a second...
:-D
Succubi and kitsune? You have terrible taste in demons.
Creatures like mariliths and glabrezus I don’t allow close enough to leave marks :-P
I have a bit more than that, and I only self-identify as an LW regular, not any kind of a “real member”. At a guess, you are not a real member until you stop looking at your karma, except maybe as a feedback on individual posts and comments.
I’ve had that thousand number stuck in my mind for some time now, so I suppose now that I’ve asked the question, I can actually stop looking at that total, and just try to micro-manage each individual post to maximize the karma it picks up.
… Or not, since I’ve got other things to spend my time on.
:)
I’m close to breaking a thousand*, which feels exciting, but I don’t think of myself as becoming a particularly “real” member. Maybe when I start going to meetups.
*edit: not as close as I thought
Depends on what you mean by ‘Real’ member. Karma is roughly correlated with comment/posting volume so I think 1000 is a decent threshold for “regularly says smart enough things” but plenty of people who are steeped in LW worldview almost never post to LW.
I am not sure what “a real member” means. So far no one has tried to collect any dues from me or issue me a membership card. Like shminux, I would probably call myself “a regular”.
I celebrated breaking 1337 more.
By some coincidence, I spent a long time on that particular score, too.
Meh, I could not remember where I was in relation to this threshold without scrolling up, and I don’t open tabs to check people’s karma while I’m reading and commenting.
I have over 1000 and feel like a welcome guest, but not a real member. The score is from asking vaguely interesting questions, rehashing old LW arguments, a minimal amount of original thought and losing a couple of points from complaining about LW phenomena.
If one were to accept a dualistic interpretation of the hard problem of consciousness, to what extent would or would that not increase the probability of some aspects related to theism (e.g., that there may exist some completely non-material conscious being(s), that said being(s) may have power to interact with the world arbitrarily, that the non-material aspect of the mind might survive death, etc.)? I’m having a hard time wrapping my head around this. Specific references appreciated.
I notice that I’m becoming more attached to LessWrong (again). I habitually open the LW page when I start PC work and check for new posts and messages (and karma). The last time this happened I controlled it by placing ‘minor inconveniences’ around LW (a block in /etc/hosts). Do others notice this too? Is this normal in some way? What should I do about it?
My google skills totally failed me and I know this should be easy but...
I read a story years ago about an emperor and shogun. The emperor was nice and the shogun was harsh, and when they switched places the people loved the shogun and respected him but hated the emperor. I’m telling it poorly.
Anyway I have been totally unable to find it again to cite or pass on. Anyone willing to lend me their Google mastery? Feel free to make fun of me for how easy it was for you ;)
You can also try reddit’s TipOfMyTongue subreddit
I can’t help you directly, but TVTropes’ You Know That Show is really good at finding these kinds of things.
I want to start following blogs across several platforms, by being able to view their new posts in one place. From what I can tell, I think this means I want an RSS reader. I use a Windows laptop that’s not too old- does the computer, or Google, come with one? (This seems like something either ought to, but I don’t know.) Do I have to download one, and if so, is there a best one?
I use http://feedly.com/ , mostly because I had a robust RSS feed when Google used to come with one, and Feedly volunteered to transfer over RSS accounts when it was shutting down to capture users. I don’t have any complaints.
How do people who sign up for cryonics, or want to sign up for cryonics, get over the fact that if they died, there would no longer be a mind there to care about being revived at a later date? I don’t know how much of it is morbid rationalisation on my part just because signing up for cryonics in the UK seems not quite as reliable/easy as in the US somehow, but it still seems like a real issue to me.
Obviously, when I’m awake, I enjoy life, and want to keep enjoying life. I make plans for tomorrow, and want to be alive tomorrow, despite the fact that in between, there will be a time (during sleep) when I will no longer care about being alive tomorrow. But if I were killed in my sleep, at no point would I be upset: I would be unaware of it beforehand, and my mind would no longer be active to care about anything afterwards.
I’m definitely confused about this. I think the central confusion is something like: why should I be willing to spend effort and money at time A to ensure I am alive at time C, when I know that I will not care at all about this at an intermediate time B?
I’m pretty sure I’d be willing to pay a certain amount of money every evening to lower some artificial probability of being killed while I slept. So why am I not similarly willing to pay a certain amount to increase the chance I will awaken from the Dreamless Sleep? Does anyone else think about this before signing up for cryonics?
I think your answer is in The Domain of Your Utility Function. That post isn’t specifically about cryonics, but is about how you can care about possible futures in which you will be dead. If you understand both of the perspectives therein and are still confused, then I can elaborate.
Can I learn to consciously lower my heart rate? I don’t think there’s anything wrong with my heart rate generally, but I’ve been having problems for the last few months with my pulse skyrocketing in the very specific context of the pre-blood-donation screening, and there doesn’t seem to be anything I can do to stop it. I think I may have become entrenched in a horrible feedback loop of knowing I’m going to fail that part of the screening, causing me to be nervous about it, causing my pulse to speed up, causing me to fail. I’ve toyed with various relaxation techniques, but they don’t seem to help very much, and I don’t actually feel particularly nervous while my pulse is being taken, so that might be the wrong approach entirely. I only need to be able to keep below 100bpm for about a minute. Any advice or ideas about what might be going on would be greatly appreciated.
I’ve read that the CEO of Levi’s recommends washing jeans very infrequently.
Won’t they smell? I have a pretty clean white-collar lifestyle, but I’m concerned about wearing mine even once or twice between machine washing. Is it considered socially acceptable to re-wear jeans?
I’ve found the results are very different between living in a hot city and a cold city. When I lived in a hot, coastal city, I was never able to rewear anything; in my current cold city I can (theoretically, mind you) go days without a shower.
I usually wear jeans about three, sometimes four, times between washes. I haven’t noticed any smell, and haven’t heard any complaints either.
Not sure if it’s in addition to what you’re thinking of or it is what you’re thinking of, but Tommy Hilfiger ‘never’ ‘washes his Levis’. I heard this and confirmed with a fashion- and clothing-conscious friend that they (the friend) had tried it. I used to wash jeans and chinos after a few consecutive days of wearing them. For the past five or six weeks I’ve been trying out the ‘no wash’ approach. I wore one pair of jeans for about thirty five days (maybe split into two periods of continuous wearing) and washed them probably once or never during that time. So far as I could tell they did not smell anywhere near enough to be offensive, and I only stopped wearing them because I got too small for them. This included doing some form of exercise like pushups, circuits, or timed runs at the track in the jeans (and then not showering for a few hours afterwards) on most days.
After those jeans I’ve been wearing the same pair of chinos for eight days and they seem to be fine. It’s worth giving a try to see if it works for you too, in your circumstances. It is very plausible that climate, bathing frequency, sensitivity to own sweat, sensitivity to laundry products, underpants use etc. provide enough variation between people that doing it is a no-brainer for some and not doing it is probably right for others.
During this period, before showering each night, I take the trousers off, shake them off, then (assuming I don’t have any reason to think the outside of them had accumulated much ickiness during that day) drape them inside out over a chair, which hopefully lets them air out and let moisture evaporate off. (In fact, I now do this with most of my clothes, and it seems like it might indeed make them smell fresher for longer.)
https://www.google.co.uk/search?q=tommy+hilfiger+wash+jeans http://www.dailymail.co.uk/femail/article-2459720/Tommy-Hilfiger-thinks-crazy-throw-jeans-laundry-wear.html
Most jeans you aren’t supposed to wash too often, since it can make them fade and wear out faster, but you still want to wash them when they seem dirty like when you spill on them, get really sweaty in them, or if they smell. I wash mine every 3-5 wears which seems to be a good amount.
If you’re wearing raw denim, you’re not supposed to wash your jeans at all but raw denim is kind of a niche thing anyways.
I can see how freezing might help with smell, but what confuses me is sweat. If I wear pants more than about 4 times in a row, they start to itch, and I don’t see how freezing would help with that. I don’t think I sweat unusually much.
I wondered about this too before I tried it. I thought I had a higher-than-average risk of being very sensitive to my own perspirations/sheddings. But I haven’t detected any significant problems on this front after trying it. It goes both ways: Now I know that I’m not very sensitive to my own trouser sweat, it means I can wear trousers longer after they’ve been washed (i.e. exposed to potentially irritant laundry products), which possibly reduces the risk of skin problems from the laundry products (another problem that I think I have a higher-than-average chance of having; the two aren’t mutually exclusive).
(Insert disclaimer about this maybe being very dependent on lots of factors, e.g. maybe I’ll move to another city with an imperceptibly different climate and get screwed over by wearing jeans for more than a day.)
When it comes to sweat different people have different issues.
I remember that once I gave a PowerPoint Karaoke talk in front of maybe 80 people at my university. Afterwards I smelled really bad because of stress-induced sweat.
On the other hand, I can dance Salsa until my clothes are wet from sweat because I move that intensely, and in my experience that doesn’t lead to clothes that stink the next day.
Some nerdy people with high levels of social anxiety have sweat that smells worse than average.
The only way to know for sure whether something smells in a socially unacceptable way is to ask other people. I, for example, trust my mother to give me honest answers to that question. But you can ask any friends you trust to give honest answers and who have decent social skills.
It might also be worth getting opinions from multiple people, as people’s smell receptors differ. Androstenone, for example, smells musky and pleasant to some people and bad to others.
Who knows if this really works, but the Levi’s Vice President of Women’s Design recommends leaving your jeans in the freezer overnight once a month (to kill smelly bacteria).
The USA puts a pretty unique emphasis on not smelling, so this might be a cultural difference if their upbringing wasn’t in the US.
That’s the upper-class British thing, see Orwell :-)
It also depends on the jeans. Some jeans are, for some reason, more likely to smell after being worn just once. I have no idea why, but several people I know have corroborated this independently.
One thing that can affect this is the material used in the jeans. Typically, a lot of synthetic fabrics tend to start smelling more easily, while wool and silk are known for being naturally odor resistant. This can vary some, but it’s a good general guideline.
Why would people in the 27th Century want to revive other people who went into some kind of biostasis in the 24th Century? Wouldn’t that make the 24th Century people selfish or narcissistic for wanting to take advantage of the more advanced health care of the 27th Century?
In general, cryonics organizations have funds and people whose duty it is to revive frozen people. It’s not like they’re left in a random ditch to be found. Alcor (I don’t know about the others) tries to recruit to this body only people who have relatives or loved ones in cryo and are signed up themselves, so they have an incentive both to unfreeze existing people and to cause the organization to unfreeze people in the future.
But even were they tossed into a ditch, there are several possible reasons:
1: Research. People from 300 years ago will have a wealth of interesting genetic, bacterial, possibly physiological, psychological, and other differences to study. They’ll also have relatively privileged information about archaeological finds and historical manuscripts. I’m not sure how much that part will apply to the future, considering many more records are kept now than were 300 years ago, but the biological parts of the point hold regardless.
Profit: Reviving people from the past (at least the first time) would make whoever did it famous and respected, which would lead to more grants and other forms of remuneration. You could also put those weird primitives on a reality TV show.
Charity: People regularly pay to save stray dogs, cats, etc. from starving to death outside. Not only do they usually receive nothing in return, they lose money and arguably contribute to the overpopulation of said animals. It would surprise me, assuming the process was not too costly, if no one in the next few centuries felt it their calling to reanimate the frozen altruistically.
Experimentation: To perfect a revival procedure, you need to test it out on people. Depending on the availability of future corpsicles, there might be a limited supply, which would bump people frozen centuries ago into the ranks of the potential scientific revivals.
Your second question is more a matter of morals, but plenty of people do selfish things, and we don’t punish them with death for it.
Sidetrack: If you’re cryonically revived, what are the odds of getting your gut bacteria back?
I’d imagine that if you can be revived, the reasons you want your gut bacteria back would no longer apply.
What’s your line of thought?
My guess is that if you’re being revived as something much like your current self, you will at least need simulations of gut bacteria.
I’m not sure whether I’m grossly ignorant of the biology here. Supposing they’d still be helpful, would it be important to get your gut bacteria back, rather than some other gut bacteria? Would that be more akin to replacing a kidney, or replacing part of the brain?
Or a simulation of their beneficial effects.
A lot of cryonics is head-only cryonics, so pretty low.
Good point, but what about whole-body cryonics?
I think very high if they’re trying to preserve them, otherwise very low. We know bacteria can survive freezing fairly well in many cases, but if they’re not trying to preserve them I imagine the revival process could be deadly to gut flora.
I think you missed the point of my question. Why does wanting something at time index A, before that something has become technically feasible, make you a bad person, but not at time index B, when that something exists, works and has become socially accepted?
In other words, presumably people in the 24th Century could have the means for reviving cryonauts from the 21st Century, and they could have started to make progress on their own radical life extension as well. Would they go around calling each other narcissists for taking advantage of what they consider the current standard of health care? If not, would they say that about visionary people in biostasis from earlier centuries who expressed the wish to benefit from what the 24th Century people know how to do? Or if some of them survived to the 27th Century and they had friends and relatives in biostasis who wanted revival using the more advanced health care of the latter century, would they say something like, “We’re morally okay with our life extension, but those people we knew a few centuries back in biostasis are selfish for wanting what we have. Screw them”?
In other words, why does wanting something at the “wrong” time reflect badly on your character?
This sounds closer to a rhetorical question or a moan than to an actual question, and specifically moaning about negative perceptions of cryonicists is not a very high-value thing to do in a forum that’s generally pro-cryonics.
I don’t think it does. I’m confused. Am I missing some piece of context?
I’m used to seeing the assertion that cryonic revival almost definitely cannot happen unless a society has already Solved Biology, so the hypothetical “revival but no life extension” society feels contrived. Could the people of the future be hypocrites? Sure, I guess.
No, although an argument can be made that the wrong time is defined in terms of opportunity costs: wanting something when that money could more easily and better be spent on other things. Some might say it’s wrong to want a brand new car when you could buy a used car that works fine and spend the difference on mosquito nets. Wanting cryo when it costs $200k is arguably more wrong than in the distant future when it costs $10k.
Wait a second, who exactly is calling pro-cryo people bad names?