Open thread, September 15-21, 2014
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
I’m posting here on behalf of Brent Dill, known here and elsewhere as ialdabaoth—you may have enjoyed some of his posts. If you read the comments at SSC, you’ll recognize him as a contributor of rare honesty and insight. If you’d had the chance to talk with him as much as I have, you’d know he’s an awesome guy: clever, resourceful, incisive and deeply moral. Many of you see him as admirable, most as relatable, some as a friend, and more, I hope, as a member of our community.
He could use some help.
Until last Thursday he was gainfully employed as a web developer for a community college in Idaho. Recently, he voluntarily mentioned to his boss that he was concerned that seasonal affective disorder was harming his job performance; his boss mentioned it to his own boss, who suggested in all good faith that Brent talk to HR to see if they might help through their Employee Assistance Program. In Brent’s words: “Instead, HR asked me a lot of pointed questions about when my performance could turn around and whether I wanted to work there, demanded that I come up with all the solutions (after I admitted that I was already out of brainpower and feeling intimidated), and then directed me to turn in my keys and go home, and that HR would call me on Monday to tell me the status of my employment.” Now, at the end of the day Tuesday, they still haven’t let him know what’s happening, but it doesn’t look good.
I think we can agree that this is some of the worst horseshit.
On the other hand, he’s been wanting to get out of Idaho and into a city with an active rationalist community for a while, so in a sense this is an opportunity.
Ways to help: Brent needs, in order of priority, a job, a place to stay, and funds to cover living and moving expenses—details below. Signal boosts and messages of support are also helpful and appreciated.
Ways NOT to help: patronizing advice/other-optimizing (useful information is of course welcome), variations on ‘cool story bro’ (the facts here have been corroborated to my satisfaction with hard-to-fake evidence), and disrespect in general.
1. Job: Leads and connections would help more than anything else. He’s looking to end up, again, in a good-sized city with an active rationalist community. Candidates include the Bay Area, New York, Boston, Columbus, San Diego, maybe DC or Ann Arbor. He has an excessively complete resume here, but, in short: C#/.NET and SQL developer, also computer game development experience, tabletop board/card game design experience, graphic art and user interface experience, and some team leadership / management experience.
2. Crash space: If you are in one of the above cities, do you have/know of a place for a guy and his cat? How much will it cost, and when will it be available? Probably he’ll ultimately want a roommate situation, but if you’re willing to put him up for a short time that’s also useful information.
3. Funds: Brent is not now in immediate danger of going hungry or homeless, but a couple of months will exhaust his savings, and (although it is hard to know in the current state of things) he has been told that the circumstances constitute “cause” sufficient to keep him from drawing unemployment. Moving will almost certainly cost more than he has on hand. There is a possible future in which he runs out of money stranded in Idaho, which would be not good.
If you feel moved to help, he has set up a gofundme account here. (The goal amount is set at his calculated maximum expenses, but any amount at all would help and be greatly appreciated—he would have preferred not to set a funding goal at all.) Though Brent has pledged to eventually donate double the amount he raises to Effective Altruist causes, we wouldn’t like you to confuse contributing here with charitable giving. Rather, you might want to give in order to show your appreciation for his writing, or to express your solidarity in the struggles and stigma around mental illness, or as a gesture of friendship and community, or just to purchase fuzzies. Also, you can make him do stuff on YouTube, you know, if you want.
Thank you so much for your time and kindness. -Elissa Fleming
Official update: HR “explored every possible option” but “ultimately we have to move forward with your termination process” after “making certain there was unanimous consensus”.
Apparently several people in my now ex-office are upset about this.
Is Austin on the list? I work at a not-evil tech startup called SchoolAdmin that does school admissions software for a mix of public/private/charter schools. We’re not hiring devs right now, but that might possibly change since we have a product manager coming in October. The company is REALLY not evil; we’ve had three different people come down with mental or physical health issues, and the president’s mantra has been ‘your job is to get better’ in every case.
I could possibly also offer a place to crash: I’ve got a futon, a study it could be moved to, and I already have cats.
I would recommend Austin as well. There are loads of developer jobs here, though I don’t know any particular place that is hiring right now. We have an active, close-knit rationalist community that I think is pretty fantastic. Worth consideration.
I was going to make a plug for Boston, but with SAD, someplace with a sunny winter like Austin sounds like it might be nicer.
That narrative is unambiguously a case of illegal discrimination. Idaho law defines:
and
I am also very confused as to how actual HR drones in an actual HR department wouldn’t be familiar with the law and able to create a suitable pretext for termination.
I already mentioned the A.D.A. to Ialdabaoth, but fighting a discrimination case probably takes more money than he’s looking to raise to move, as well as being psychologically exhausting.
Either of those reasons is probably enough to convince a rational person. The spirit of Immanuel Genovese still sits on my shoulder screaming “Passive complicity!” at me every time I contemplate accepting an outcome in which it is normal that this kind of treatment happens.
Me too.
The problem is… this is a complex and delicate situation, as all real-life situations are.
There are co-workers who have gone the extra mile to help me and protect me. They didn’t do everything they could, because they have families, and they know that if they rock the boat too hard it will be them, not HR, that get thrown overboard.
They aren’t rationalists themselves (although I was slowly working on one of them), but they are caring and intelligent people who are themselves struggling to find meaning and stability in a harsh world.
If I could find a way to laser-lance out the demons of stupidity from my workplace, I would do so in an instant. If I could do so in a way that could add net funds to my own cause, I would already be doing so.
But as it is, I know exactly who would suffer for it.
(That doesn’t mean that I have committed to a decision yet; I am still weighing necessary evils.)
I hope this is not patronizing advice but rather useful info. To be clear, I am not pressuring you to do anything; I know there are many reasons not to pursue discrimination claims, but I wanted to make sure you are aware of all your options.
The Equal Employment Opportunity Commission is a possibly less costly and less adversarial way of pursuing a discrimination claim. They will investigate independently and try to arrange a settlement if they find discrimination. If settlement is impossible, they may even sue on your behalf. They have won a lot of ADA-related claims. I’m pretty sure they will consult with you for free, so the only initial costs are time and emotional energy.
I’m letting you know about what my shoulder angel/demon is shouting, because if I follow his advice I am not optimizing for giving you good advice.
You can make the empty threat that you will sue if you’re not re-hired. Heck, you could even register a law-firm-y domain, copy some law firm’s website, configure Google Apps for email, and send them an intimidating email “from your lawyer”. You don’t have anything to lose at this point, do you?
You might be able to get a lawyer to work on a contingency basis—they only get paid if you win.
Woah, well done everyone who donated so far. I made a small contribution. Moreover, to encourage others and increase the chance the pooled donations reach critical mass, I will top up my donation to 1% of whatever’s been donated by others, up to at least $100 total from me. I encourage others to pledge similarly if you’re also worrying about making a small donation or worrying the campaign won’t reach critical mass.
If 102 people all pledge to donate 1% of everyone else’s total, the consequences could be interesting. (Of course it’s vanishingly unlikely. But pedantic donors might choose to word their pledges carefully.)
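For the pedants, here’s a toy iteration (with an assumed $1,000 base from non-pledgers and perfectly symmetric pledgers; all numbers are hypothetical) showing why the number 102 is the interesting one. Below roughly 100 such pledgers the totals settle down; above that, each pledger’s 1% feeds back through more than 100 other pledges and the total grows without bound.

```python
# Toy model: N pledgers each donate 1% of the total given by everyone else,
# on top of an assumed $1,000 base from non-pledgers. By symmetry each
# pledger gives the same amount x, so just iterate x = 0.01*(base + (N-1)*x).

def pledge_total(n_pledgers, base=1000.0, rounds=500):
    x = 0.0
    for _ in range(rounds):
        x = 0.01 * (base + (n_pledgers - 1) * x)
    return base + n_pledgers * x

for n in (50, 99, 102):
    print(n, round(pledge_total(n)))
# 50 and 99 pledgers converge to finite totals (about $1,980 and $50,500);
# 102 keeps growing with every added round, since the feedback factor
# 0.01 * 101 = 1.01 exceeds 1.
```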
I also hope someone can help out with writing a better resume; this one is seriously subpar. A single page of achievements based on http://www.kalzumeus.com/2011/10/28/dont-call-yourself-a-programmer/ might be a start: “describe yourself by what you have accomplished for previous employers vis-a-vis increasing revenues or reducing costs”.
Yes, thanks, this has been discussed elsewhere. (That said I’ll repeat the request to avoid disrespect or patronizingly phrased advice.)
I don’t have any sensible way of learning about current affairs. I don’t consume broadcast or print news. Most news stories reach me through social media, blogs, word of mouth or personal research, and I will independently follow up on the ones I think are worthy of interest. This is nowhere near optimal. It means I will probably find out about innovations in robotic bees before I find out about natural disasters or significant events in world politics.
Regular news outlets seem to be messy, noisy attention traps, rather than the austere factual repositories I wish them to be. Quite importantly, there seems to be a lot of stuff in the news that isn’t actually news. I’m pretty sure smart people with different values will converge on what a lot of this stuff is.
Has this problem been solved already? I’m willing to put in time/effort/money for a minimalist, noise-free, sensibly-prioritised news digest that I care about.
ETA: Although I haven’t replied to all these responses individually, they seem very useful and I will be following them up. Thanks!
What sort of current events do you want to find out about how quickly, and why?
You should consider, if you haven’t already, the possibility that the value of learning about such things quickly is almost always almost exactly zero. Suppose e.g. there’s an enormous earthquake half-way around the world from you, and many thousands of people die. That’s a big deal, it’s very important—but what immediate difference should it make to your life?
One possibility: you might send a lot of money to a charity working in the affected place. But it seems unlikely to me that there’s much real difference in practice between doing so on the day of the disaster and doing it a week later.
Another possibility (albeit a kinda callous one): it may come up in conversation and you may not want to sound bad. But I bet that in practice “social media, blogs, word of mouth or personal research” do just fine at keeping you sufficiently up to date that you don’t sound stupid or ignorant. In any case, what you need to know about in order to sound up to date is probably roughly what you get from existing news sources, rather than from a hypothetical new source of genuinely important, sensibly prioritized news.
I appreciate the distinction you make between urgent and non-urgent news.
Finding out about things quickly isn’t necessarily my priority. In fact, one of my problems with “regular” news outlets is that they have a poor sense of time sensitivity, and promote news that’s stopped being useful. Knowing about Icelandic volcanoes grounding all northern European air traffic is very useful to me when it’s just happened, but in a week’s time I may as well read about it on Wikipedia.
I’m more concerned about finding out about things at all. My ad hoc news accretion drops the ball more often than I’d like. My ideal wish-upon-a-star would be a daily digest saying “here’s a list of things that happened today, each in two sentences or less”. I can then decide whether to follow them up or not.
(I have a secondary motive of wanting to associate events in my memory to improve the granularity of my recall. I know, for example, that Eyjafjallajökull erupting was concurrent with the run-up to the 2010 UK General Election, which helps me position it in time quite accurately, as well as position personal events that I remember happening around the same time.)
Hilariously, a good option for you may be an actual newspaper. Made out of paper.
It comes once a day, it summarizes a few dozen major events in a reasonably succinct way, and many of them try to minimize reporting bias. You could consider specific papers based on size and editorial style (most offer free or cheap trials), and then sign up for a short subscription to see how you like it.
And the greatest advantage is that it has no hyperlinks to click. Thus, you only spend limited time reading it.
But it has a lot of the same stuff you’d have found beyond the hyperlinks—right underneath the headlines, without even needing to click. I’m not sure that’s a win.
No additional clicks from there, though, so still bounded. You can read through all the interesting stories in a paper (I used to do this) and then you’re done; with the web there’s no obvious stopping place.
That hasn’t been my experience with newspapers.
I get The New York Times, and I find it pretty good in those regards (depending on your definition of “reasonably succinct”). And as a bonus, its science reporting is generally not hair-rippingly terrible.
I find daily takes up too much time, and the reporting doesn’t have enough distance. So I’d recommend reading a Sunday paper instead—or, better still, a weekly or monthly magazine. If you’re in the UK then Prospect is fantastic; I also read TIME (I’ve heard allegations that the US edition is dumbed down, so try to get a European or Asian edition).
Interesting, so the European edition of TIME is not a complete insult to their readers’ intelligence?
Do you have some examples in mind of things you never found out about but would have been better off for knowing?
(Of course if you literally never found out about something you can’t know. But I’m guessing there are things you did find out about but not until much too late.)
A couple of semi-recent examples would be the referendum on Scottish independence and the Islamic State business in the Middle East. I obviously found out about them, but it felt like I found out about them a lot later than I would have liked. It’s not so much that these have an immediate impact on my life (Scottish independence does, but it’s not like I’d be able to remain ignorant by the time it’s resolved), but they’re massive news events that I basically didn’t notice until everyone else was talking about them. This suggests I’m probably missing other events that people aren’t talking about, and that makes me want to up my game.
What about the recent Swedish election results?
Incidentally, it was disturbingly hard to find an article about them that didn’t put a misleading spin on the results.
I can’t find it, but I once read an article from a guy I trust about how he just stopped following news, assuming that if anything sufficiently important happened, he’d find out about it anyway. His quality of life immediately rose. Having followed this approach for a few years now, I would suggest consuming zero news (it’s minimalist, completely devoid of noise, and exceptionally well-organized).
“Remember, if it’s in the news don’t worry about it. The very definition of news is “something that almost never happens.” When something is so common that it’s no longer news — car crashes, domestic violence — that’s when you should worry about it.”—Bruce Schneier
But rare events matter too. For example, the big news in July 1914 was the outbreak of a massive war involving all the major European powers. I suggest that someone taking Bruce Schneier’s advice (“World wars are rare events, so you don’t need to worry if one breaks out”) is substantially misguided.
This is a very good heuristic but it does have a few exceptions, e.g. astronomical, meteorological, and similar events. Lots of people assume that if the news is talking about the supermoon then it must be an exceedingly unusual event.
I remember Nassim Nicholas Taleb claiming exactly this in an interview a few years ago. He let his friends function as a kind of news filter, assuming that they would probably mention anything sufficiently important for him to know.
I think this is it: http://joel.is/the-power-of-ignoring-mainstream-news/
Wikipedia’s current events portal is relatively minimalist and low-noise. It’s not prioritized very impressively.
The Economist has a Politics This Week and Business This Week section. Both are only a page each and are international in scope.
Get an RSS reader and read only the headlines. That way you can process hundreds of news in a few minutes and only open the ones that seem seriously important.
A trivial inconvenience which could make a huge difference—if there were software that put all those headlines in plain-text format, to reduce the temptation of clicking. (There is still Google if something is irresistible.)
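That software is nearly trivial to sketch; here’s a minimal version using the feedparser library, assuming Python, with an example feed URL you’d substitute for your own subscriptions:

```python
# Minimal plain-text headline dump; requires `pip install feedparser`.
import feedparser

FEEDS = ["http://feeds.bbci.co.uk/news/rss.xml"]  # example feed, substitute your own

for url in FEEDS:
    for entry in feedparser.parse(url).entries:
        print(entry.title)  # titles only: no links, nothing to click
```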
In a similar vein: How do I find out what to read and what to learn more generally? I don’t care about reading the latest Piketty but I want to read the best summary and interpretation of a philosopher from the last 10 years instead of the original from 500 years ago. Same goes for Physics text books and so on, and literature.
I scan google news headlines, top stories section and click on the items of interest. Yes, there are still attention grabs and non-news, but this is usually fairly clear from the article names.
I don’t think it’s a problem. Social media is good enough to tell you about significant events in world politics.
When a new topic bubbles up where I want to have an informed opinion, I’ve found vox.com or the Wikipedia summary to be good.
My foreign news comes almost exclusively from the CFR Daily News Brief, which sounds like exactly what you’re looking for. The daily briefs also link to their Backgrounders, which are excellent and relatively short summaries of the backgrounds to many hot-topic issues.
Did you mean to link here?
Yes, thanks. Apparently didn’t copy/paste correctly. Fixed now.
It was quite an entertaining copy/paste error.
:). Was the CFR stuff what you were looking for?
Oh, yes, thanks. I’m making a collection of news-digesty bookmarks, and it’s in there.
Meh. Sufficiently big natural disasters or political events find a way onto my Facebook feed anyway.
Once in a while when I’m bored I check out the Android app of my country’s wire service (I think the American equivalent would be the Associated Press) and/or the box in the top right of the English Wikipedia’s home page. But it’s a rare week that I spend more than half an hour seeking out news deliberately.
I’m not sure how much one should trust the news filter in one’s country’s wire service.
Trust it for what purposes?
Trust to not be politically biased.
Given the way I use it I don’t care whether they’re politically unbiased, just whether they’re less addictive than blogs and Facebook.
So another voter defects in the rational ignorance collective action problem.
Why would knowing who won the World Cup, or how many kids Brad Pitt and Angelina Jolie have, be of any relevance at all when deciding whom to vote for?
(I jest, but LeechBlock is going to get me the hell out of here in a minute and a half so I don’t have time to write a more serious reply.)
In what way? Do you wish you spent more time following current affairs? I don’t follow them, but don’t see any problem with it—if anything, I occasionally have to resist the urge of looking up what’s going on in the world, which I put in the same mental bucket as the urge to look at the top entries of /r/funny.
I don’t think in ten years time having read one more news item on the Gaza Strip will change my life more than having seen one more picture of a cat stuck in a bowl.
(I do however sometimes go more into a binge of “reading up on something and trying to understand it”, but I rely more on Wikipedia than on news for that; “breaking news” tends to repeat the same points over and over again, and doesn’t put much focus on the big picture)
I used to read the Wikipedia current events page, which I found to be a nice summary of what’s going on without going into too many details.
I trust my brain to collect facts and raise them to my attention when they’re important. “Current affairs” describes a class of fact that I don’t think is being adequately collected.
The Wikipedia current events page is a very good example of what I’m looking for.
I get my news from Instapundit.
I don’t wish to get into a mindkilling debate about this here, but for sixes-and-sevens’ benefit, I’ll note that Instapundit is a highly ideological libertarian (alternatively, in the view of many progressives, a partisan Republican pretending to be a libertarian). If you use him as a news source, you should balance with a progressive source.
ETA: This advice holds even if you are skipping narrowly political articles and reading about crises/disasters, etc., since ideology informs what kinds of crises people consider salient.
This looks like the classic grey fallacy.
Looks like, but isn’t. The goal isn’t that you take one viewpoint and take another viewpoint and find “something in the middle”; the point is that having multiple independent viewpoints makes it easier to spot mistakes in each.
It feels natural for us to think critically when our preconceptions are contradicted and to accept information uncritically when our preconceptions are supported. If you want to improve the odds that you’re reading critical thought about any given topic, you need sources with a wide range of different preconceptions.
I agree and wouldn’t have objected if Prismattic advised to read multiple sources from a variety of viewpoints. As it is, he just said “you need to read progressives as well” and that’s a different claim.
I’m not arguing that the views should be averaged, but that the combined sample of news stories will be less likely to suffer from politically motivated selection bias. A libertarian/fusionist source is likely to devote more coverage to, say, stories of government corruption and less to stories of corporate wage theft or environmental degradation; a progressive source to do the opposite. All of those stories might be important (in general or to sixes-and-sevens in particular), so the combined news feed is in that sense better.
So why did you recommend progressives and not, say, news coming from the Roman Catholic Church, from Marxists, from PETA, from Infowars, from Al Jazeera, etc. etc.?
Well, taking those specific examples as non-rhetorical: PETA, the Catholic Church, and Infowars are various kinds of insane in ways that extend beyond ordinary political mindkilling, so I’d be unlikely to recommend them. Al-Jazeera English is actually pretty good as a news source, but its website is an adjunct of being a broadcast news source, which is less helpful from a time-investment perspective. I predict that a center-left news source will provide coverage on a broader range of issues than a far-left news source, but your mileage may vary.
The center-left source is also most likely to compensate specifically for the coverage holes in a center-right source. That still isn’t averaging their factual claims.
You’re not averaging factual claims, you’re averaging exposure to viewpoints.
I would argue that this is summing, not averaging, exposure. There’s a difference between saying “You should read both GreenNetNews and BlueCast” and saying “To save time, read GreenNetNews on odd-numbered days and BlueCast on even-numbered days”.
I think it’s averaging because your capacity to absorb news/viewpoints is limited.
Are you using “progressive” to mean left-leaning, or in the usual way? Just for clarity; if you meant the latter disregard.
I thought “left-leaning” was the usual way? What else, in the political sphere, does “progressive” mean?
I’ve heard it used as synonymous with “good,” “new,” and anti-rich tax policy. Can you make a recommendation? Either just left or, since libertarian is socially liberal / fiscally conservative, a good source that is fiscally liberal and socially conservative? I asked the DNC for the former and just got on their mailing list. Not impressed.
The US “left” is considerably to the right of the European left, and LW has a broad international readership, so I think just saying “left” would be more confusing (“liberal” would be even more confusing, given the dispute between libertarians and progressives over who is the legitimate heir of 19th century liberalism). But yes, in this case, I meant progressive in the sense of “mainstream center-left.”
Some of the US “left” (notably, the mainstream Democrats) are considerably to the right of the European left. “Left” encompasses a rather large landscape.
Right right, thanks. Any source you’d recommend?
Of course a progressive will think that progressivism is good, and part of progressivism is that it is good because it is new (the clue is in the name). Those who are not progressives will hardly agree. And anti-rich tax policy is a straightforward left-leaning policy.
It is tempting for progressives to define the word to mean “good” and “new”, as it saves them the trouble of defending the ideology. The ideology can then be treated not as any set of beliefs about the reality, but as reality itself.
No, that’s not it. It doesn’t mean you can’t have new things happen that are bad. It does refer to a time derivative, but it’s more of a goal than a statement of fact: government and society are not as good as they could be, and we can engineer the government to improve both. That’s ‘progress’. (Note: this summary is not an endorsement)
Progressive tax structures are not named for this time derivative; they are named for the derivative with respect to income. Regressive tax structures exist, but they aren’t named so due to being more like the past.
That is progress, but that is not what is meant by “progressive” in the political sense. The belief that government can be engineered to improve things is shared by everyone except those in despair of it ever happening. Moldbug has proposals to do that—is he a “progressive”?
No, “progressive” means certain specific views about what is valued as an improvement, and specific beliefs about what policies will make those improvements. These values and views are accurately summarised as “left-leaning”.
A lot of libertarians would beg to disagree there.
I thought about that, but I decided that reducing the government and doing away with it counted as engineering the government. For the libertarian, the task is complete not when there is nothing more to add, but when there is nothing more to take away.
Yes, there are specific things it’s aiming at. I was justifying the word choice. And either way we’ve moved past the ridiculous notion that it is good because it’s new. If you’re going to try to correct me for being overly general you can at least own up to having been far more overly general just a few hours previously.
These days, how many of the people who call themselves progressive think that GMOs are really great because they are new technology?
Half a century ago progressives really liked nuclear power because of the hope of the wealth it would bring. These days, not so much.
As someone else already pointed out, “progressive” doesn’t mean “approving of all new things” (and in the context of taxation it’s only a verbal coincidence that progressive politics tends to go with liking progressive taxation). Having said that, and in full awareness that anecdotes are little evidence: Hi, I’m a political progressive who has no objection in principle to GMOs and thinks we should be moving to nuclear power in a big way. (I have some incidental concerns about GMOs; e.g., they interact with IP law to provide exciting new ways for unscrupulous corporations to screw people over, which is a pity.)
I don’t think it’s a coincidence that progressives around 1900 called the method of taxation they favored progressive taxation.
I didn’t say anything about objections in principle; my statement was much weaker.
More to the point, I expect that a bunch of people on LW are pro-new-technology, but that’s not true of the average person on the left, and pretending that being pro-new-technology is an essential feature of progressive thought in the 21st century ignores the political realities.
On the other hand it was an essential feature of progressive thought 50 years ago. In Marx’s idea of history, it’s a natural law that history moves in the right direction.
The OED’s earliest citation for the term “progressive” in reference to taxation is from Thomas Paine’s “Rights of Man” in 1792. Its first citation referring to a person who favours political or social change or reform is from 1830. It’s possible that the latter meaning is older than 1792 (explanation on request) but, to say the least, it doesn’t appear that the term “progressive” as a description for taxation systems that tax richer people more dates from “around 1900” or was chosen by people who identified themselves as “progressives” in anything like the modern US sense.
I agree. I rather doubt that anyone—at least anyone using “progressive” in its current US-political sense—actually thinks otherwise, despite RichardKennaway’s remark above. (In any case, it seems clear from what he wrote that he doesn’t himself identify as progressive, and his description of progressives’ thought processes doesn’t appear to be the result of a serious attempt to understand them sympathetically.)
Google Ngram does show an uptick over that time period for “progressive taxation”. It’s the period known as the Progressive Era.
Have you read Moldbug? I do think that Moldbug argues that progressivism is about favoring the new. Cthulhu always swims left.
On LW there are a bunch of people who don’t actually agree with Moldbug about wanting to reinstate monarchy but who still accept Moldbug’s way of thinking about issues. It’s the problem with history. Moldbug tells his history about the progressives of the Progressive Era and then proclaims that today’s left thought (the thought of the Cathedral) is the same.
So much the worse for Moldbug, at least if he makes a strong claim along those lines rather than something weaker and less controversial like “people who identify as progressive tend to be more positive about new things than people who identify as conservative”.
But I haven’t devoted a lot of time or thought to Moldbug, or to neoreaction generally.
I’ve slightly lost track of what, if anything, we are actually disagreeing about here. I think it may at this point simply be about why various words have the definitions they do, which probably isn’t something that’s worth putting much further effort into.
You said you doubt that anybody thinks otherwise. I wanted to illustrate that there are people who do think otherwise. That means talking about the issue matters.
Sorry, I wasn’t clear enough: What I’ve largely lost track of is what “the issue” actually is. I do understand that at this particular point in the thread we’re talking about whether and to what extent progressivism is about liking new things. But I’ve forgotten (and haven’t much motivation to go back and figure out) why—if at all—that question is relevant to anything that matters. I’m pretty certain (and I’d guess you agree) that on the whole being a “progressive” (in the sense in which that term’s used in present-day US politics) is about other things more than it’s about liking new things.
Understanding the political thought of the last few decades is useful, and showing preconceptions to be wrong is also useful. In particular, it’s useful to understand that the relationship of self-identified progressives towards liking new things has changed in the last 50 years.
I confirm that this is accurate.
And I stand corrected that the virtue of newness in progressive thinking has got old, while the word “progressive” persists. What do they think of “progress” these days? “You can’t stop progress” was the saying back then. I haven’t heard it uttered seriously for a long time, and if it’s said at all, it’s more likely to be as a criticism of the opposite side by imputing it to them. First relevant Google hit here.
I’m fairly sure the majority of LW regulars who identify as progressives (myself included) would agree with these views about GMOs and nuclear power. However, I’m also pretty sure this is not true of the progressive movement at large, sadly. This is particularly frustrating because these two technologies are probably the most promising tools currently available for solving the problems many progressives purport to care most about.
Amen. Just saying I’ve heard that use from other moderates as well who don’t think too hard about it.
Anyway, the other question is the more interesting to me. Any good left-leaning or socially-conservative-fiscally-liberal (short name?) news source?
Short name = Christian.
“Christian” covers a lot of ground. That’s a fair description of the mainline Catholic viewpoint, but looking up a random Christian news source in the US could get you fiscal viewpoints ranging from lukewarm left to hardline right to more or less apolitical.
(It’s reliably socially conservative, though, generally speaking.)
Depends on the church.
I honestly had not considered a Christian news option.
That comes with some theological baggage, of course. You don’t want a news source that interprets everything in terms of the end times and looks forward to a nuclear war to annihilate the damned.
I’ve heard good things of the Christian Science Monitor (which obviously has even more questionable baggage), but I haven’t read it myself. Also Al Jazeera, which has other baggage (owned by a government), and which I also haven’t read.
Try reading it. Despite the name it doesn’t have an obvious Christian Science bias. Although I’ve heard it is running into financial problems due to a principled refusal to resort to clickbait and fluff stories.
CSM is very well-regarded.
When I was in college, I took a class taught by the head of the polisci department—Cuba-loving socialist type—who had a habit of recommending it during lectures.
Sure, but all news sources come with some baggage—mostly ideological, sometimes theological, and often enough just batshit crazy. That’s why you don’t want a news source, you want lots of them.
The American Conservative is definitely socially conservative and, if not exactly fiscally liberal, at least much more sympathetic to economic redistribution than mainstream conservatism. But it is composed more of opinion pieces than of news reports, so I don’t know if it works the way you want.
As others suggested, Vox could be a good choice for a left-leaning news source. It has decent summaries of “everything you need to know about X” (where X = many current news stories).
Thanks!
Any particular reason you didn’t make a similar reply to Christian’s suggestion of the ideologically progressive vox dot com?
Because I hadn’t seen it.
I find the implied accusation of bias amusing. I’ve actually tweeted at Matt Yglesias once to complain about the quality of an article on Vox.
Instapundit is highly ideological libertarian, so you should balance it out with a reactionary news source like Theden.tv or Steve Sailer.
As it happens I also read Steve Sailer, although he isn’t so much news as editorial commentary, whereas Instapundit is more the “list of headlines” kind of thing sixes-and-sevens was asking about.
According to the efficient market hypothesis, index funds should be the best way for the average person to gain a return from investment. But there is a plethora of indices to invest in. How should one find the ‘best’ one?
Further, only a relatively small part of return-generating assets are captured in publicly tradeable assets. What about private equity and real estate, huge parts of the economy?
Funds take a fraction of the earnings out, as management fees, and you want the fund that charges the lowest such fees. The early retirement blogs I read seem to agree on Vanguard being the best choice, at least in the US.
IIRC real estate prices in the US rise about 1% per year inflation-adjusted while stock markets rise about 7% on average. An average person needs a huge loan to invest in real estate and go all in, which means zero spread of risk. Real estate is also relatively illiquid, not only for practical reasons but because the return on investment depends on the timing of the transaction. You’re shit out of luck if you need money while the price of your house is plummeting.
Depends on your risk tolerance. The bigger the index, the lower the risk and the lower the possible returns, generally. Also bigger index funds are usually more liquid. Transaction costs matter quite a lot unless you have a big lump sum to invest, and even then you should consider dollar cost averaging.
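For anyone unfamiliar with dollar cost averaging, here’s a toy illustration of the mechanics with entirely made-up prices (not investment advice):

```python
# Dollar cost averaging vs. a lump sum, with hypothetical monthly prices.
# DCA buys more shares when the price is low and fewer when it is high.
prices = [10, 8, 12, 9, 11, 10]   # made-up monthly share prices ($)
monthly = 600                      # invest $600 each month

shares_dca = sum(monthly / p for p in prices)
shares_lump = monthly * len(prices) / prices[0]  # everything in at month one
print(f"DCA: {shares_dca:.1f} shares; lump sum: {shares_lump:.1f} shares")
# With these prices DCA ends up with ~366 shares vs 360; in a steadily
# rising market the lump sum would win instead.
```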
That’s not true. It’s easy to get exposure to real estate through REITs. For example, through my wealthfront.com portfolio, I’m invested in Vanguard’s US REIT ETF, VNQ.
I stand corrected.
YRC. I thought you were forgetting to adjust the stock market returns for inflation, so I went to hunt for more accurate numbers, but apparently 1950-2009 S&P500 inflation-adjusted returns (counting not just price rise, but dividends) averaged to 7% per year.
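As a sanity check on how much that gap compounds, here’s the arithmetic for a hypothetical $10,000 held for 30 years at the two real rates quoted in this thread (7% for stocks, ~1% for house prices), assuming constant rates:

```python
# Compound growth at the quoted real rates (assumed constant).
principal, years = 10_000, 30
for label, rate in [("stocks @ 7% real", 0.07), ("real estate @ 1% real", 0.01)]:
    print(f"{label}: ${principal * (1 + rate) ** years:,.0f}")
# stocks @ 7% real: $76,123
# real estate @ 1% real: $13,478
```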
Thanks. If you care about transaction costs you should probably invest in funds that reinvest dividends automatically.
There are also real estate taxes just for holding the asset, and upkeep expenses too! But to be fair, asset appreciation isn’t the only return on real estate; many investment properties are income-producing assets. But then again you can just get that exposure from REITs anyway.
In an efficient market the expected value wouldn’t be all that different between options, so base it on your risk management preferences.
It Ain’t Necessarily So: Why Much of the Medical Literature Is Wrong
Some of the material will be familiar, but there are examples I hadn’t seen before of how really hard it is to be sure you’ve asked the right question and squeezed out the sources of error in the answer.
What follows is what I consider to be a good parts summary—if you want more theory, you should read the article.
....
I guessed at a seasonal effect, but Gemini and Libra aren’t adjacent signs.
I didn’t realize that the false negative rate (not seeing a relationship when there actually is one) is higher than the false positive rate. This might mean that a lot of useful medical tools get eliminated before they can be explored.
Also (credit given to Seth Roberts), if a minority of people respond very well to a treatment being tested, this is very unlikely to be explored because the experiment is structured to see whether the treatment is good for people in general (actually, people in general in the group being tested). This wasn’t in the NEJM piece.
....
An interesting type of information bias is the ecological fallacy. The ecological fallacy is the mistaken belief that population-level exposures can be used to draw conclusions about individual patient risks.[4] A recent example of the ecological fallacy was a tongue-in-cheek NEJM study by Messerli[19] showing that countries with high chocolate consumption won more Nobel prizes. The problem with country-level data is that countries don’t eat chocolate, and countries don’t win Nobel prizes. People eat chocolate, and people win Nobel prizes. This study, while amusing to read, did not establish the fundamental point that the individuals who won the Nobel prizes were the ones actually eating the chocolate.[20]
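The fallacy is easy to reproduce in simulation. Here’s a toy example with entirely made-up numbers, where a shared country-level factor (say, wealth) produces a near-perfect correlation of country averages even though the individual-level association is zero:

```python
# Toy ecological fallacy: within each country, chocolate eating and
# Nobel-winning are independent; both just track a shared country-level
# factor. The averages correlate almost perfectly anyway.
import numpy as np

rng = np.random.default_rng(0)
country_levels = np.linspace(0, 10, 20)   # 20 hypothetical countries
chocolate = [m + rng.normal(size=500) for m in country_levels]
nobels    = [m + rng.normal(size=500) for m in country_levels]

means_r = np.corrcoef([c.mean() for c in chocolate],
                      [n.mean() for n in nobels])[0, 1]
within_r = np.corrcoef(chocolate[0], nobels[0])[0, 1]
print(f"country-level r = {means_r:.2f}, within-country r = {within_r:.2f}")
# country-level r comes out near 1.0; within-country r near 0.0
```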
On the other hand, if you want to improve the odds of your children winning a Nobel, maybe you should move to a chocolate-eating country.
....
Remembering that humans aren’t especially compliant is hard.
From reading Guinea Pig Zero: The Journal for Human Research Subjects—human beings are not necessarily going to comply with onerous food regimes. I expect that most who don’t comply simply don’t want to, but the magazine made the argument that some refuse because a human research subject is never going to be able to afford treatment based on the results of the research.
An interesting paper. The abstract says:
Ungated version?
Learn to use Google Scholar.
I don’t know of one.
I was this moment moved to search for the origin of a certain quote, and the process described in that paper seems to apply quite well to the promulgation of wrong citations. Here’s a history of the idea of “three stages of truth”. Actually, the situation for citations is even worse. The doctors in the example of the paper are observing their own outcomes as well as copying their predecessors’ decisions, but someone copying a citation may make no observation of its accuracy.
More generally, memetic propagation.
I have a notion that an FAI will be able to create better friends and lovers for you than actual humans could be. Family would be a more complex case if you value the history as well as the current experience.
I’m not talking about catgirls—if some difficulties in relationships are part of making relationships better in the long haul, then the FAI will supply difficulties.
If people eventually have relationships with FAI-created humans rather than humans generated by other means, is this a problem?
See also EY’s Failed Utopia #4-2.
I’m not sure we can extrapolate this currently. If we knew more, thought faster… maybe.
For me this means that one constraint on FAI is that it may not perform changes arbitrarily fast, i.e. too fast for humans to react and adapt. There must be a ‘smooth’ trajectory. Surely not the abrupt change suggested in Failed Utopia.
You’ve asked that before.
I don’t have any new thoughts on this question, so I’ll just quote my answer from there:
I thought that was already part of catgirls?
What’s a catgirl?
An indistinguishable-from-live sex toy.
With cat-ears.
Let’s first separate sexual aspects from the need for other companionship. Suppose everyone gets their sexual needs, if any, satisfied by catgirls+ (+ for the upgrade which includes relationship problems if necessary). If you have a crush on your coworker (or your sibling, ew!), just add a catgirl copy of them to your harem.
Further suppose that the reproduction aspect is also taken care of.
Now you have a race of essentially asexual humans, as far as human-to-human interactions go.
The question is, does it make sense to have friendbots? What, if anything, is lost when you switch from socializing with meat humans to socializing with simulated ones?
It’s not self-evident to me that they are separable.
When my heterosexual male friends tell me companionship isn’t about sex I ask them how many male companions they’ve had. Not many, I’ve gathered from the silence.
For hetero males the usual term for male companions is “close friends”. I bet the great majority have some.
But go ask some hetero women whether they think sex and companionship are well-separable :-/
Also I get the feeling 21st-century Americans have fewer close friends than the historical human norm.
I don’t know what the “historical human norm” is and I suspect there is a lot of variation there.
Try reading literature written before the past 50 years and preferably before the 20th century. That will give you an idea.
I am afraid Victorian England is not all that representative of the historical human norm.
I wasn’t primarily thinking of Victorian England. Also “before the 20th century” isn’t just the 19th century.
In Finnish the connotations of “companion” are more obviously sexual, I see; at least in my circles.
It’s probably a language issue, in standard English the word “companion” has no sexual overtones.
More to the point, this subthread is explicitly about separating sex from companionship.
Ah, but it’s quite likely that they’re heteroromantic as well as heterosexual.
Perhaps, but why haven’t I come across any homoromantic heterosexuals or heteroromantic homosexuals?
AFAIK people with mismatched romantic and sexual orientations, though very much existent, are quite rare and the -romantic terms are most often used by asexual spectrum people to describe their romantic preferences.
Asexuals with romantic orientations crossed my mind too. I can’t imagine romantic and sexual orientations as separate, but the stakes aren’t high enough for me to commit the typical mind fallacy, so I’ll keep my mind open to the possibility :)
This strikes me as superstimulating. In particular, the more catgirls you have, the more and kinkier catgirls you want.
Not necessarily. Plenty of people are happy with vanilla sex (or without). I suspect that even the kinkiest ones out there have their limit. If not, let’s talk about those who do.
That’s because vanilla sex isn’t as stimulating. The more superstimulating something is, the more experiencing it causes you to want more of it.
For people who are into one or another variety of kink, or would be if only they knew about it / were prepared to try it. I don’t think it’s obvious that that’s everyone.
That doesn’t seem to be the case; see e.g. yummy food.
I think you’re confusing “stimulating” and “addictive”.
That “explanation” is easily falsified. There are plenty of people who tried kinkier sex, enjoyed it, but reverted back to vanilla. There are plenty of people who tried roller-coasters once or twice but decided it’s too much “stimulation”.
Different people have different thresholds. If I remember the study correctly, none of the rats that tried directly stimulating their pleasure center ever went back.
Rats != people...
Yes, well it would be unethical to repeat that experiment with people.
People, however, (as shminux said) do try kink all the time. It would not be unethical to do a study on people who are already kinky and see if they get kinkier over time.
Anecdotally, when people start doing kink, they either decide it isn’t for them and stop, or they do get kinkier for a while—because they’re exploring what they like and it makes sense to start at the less extreme end of things.
Then they figure out what they like, which is often a range of things at differing levels of ‘kinkiness/extremeness’, and do that.
I mean, it’s almost trivially obvious that compared to the size of the kink community, there is an almost negligible number of people doing the human equivalent of directly stimulating their pleasure centres to the exclusion of everything else. They tend to make the news. The moderately kinky majority do not.
Well, there have been experiments on humans. http://en.wikipedia.org/wiki/Pleasure_center#Human_experiments
This looks to be wireheading lite, and if you got there I don’t see why you wouldn’t take the next step as well—the FAI will create the entire world for you to enjoy inside your head.
I thought wireheading meant stable high pleasure without content rather than an enjoyable simulated world. What do other people think wireheading means?
Well, technically the term “wireheading” comes from experiments which involved inserting an electrode (a “wire”) into a rat’s pleasure center and giving the rat a pedal to apply electric current to this wire. So yes, in the narrow sense wireheading is just the direct stimulation of the pleasure center.
However I use “wireheading” in the wide sense as well and there it means, essentially, the focus on deriving pleasure from externally caused but internal experiences and the lack of interest in or concern with the outside world. Wireheading in the wide sense is, basically, purified addiction.
If we’re living inside an FAI, “outside world” might be getting a little vague. This might even be true if we’re still living in our DNA-based bodies.
Do you think an FAI would let people have access to anything it isn’t at least monitoring, and more likely controlling?
Uploads/ems are a bit of a different case.
I don’t know, but in such a case I probably would not consider it a FAI.
How? Why does it matter in what substrate the information pattern called you resides in this case? I doubt the meat brain will have any connectivity issues once we have uploads.
I am not an information pattern having, for example, a considerable somatic component :-D
Depends. You could have a robotic somatic component, or a human body grown in a vat.
I don’t see much difference between a human body grown in a vat and one grown in a womb.
But, generally speaking, in the context of wireheading the somatic component matters.
Does it matter to you because of semantic or moral reasons? I fail to see any moral difference in living in a virtual world as a meat brain vs living in a virtual world as a silicon brain. The semantic difference is obvious.
It matters for practical reasons. Self as an “information pattern” is an abstraction and abstractions do not exist in reality.
Do fluids and solids exist in reality?
Things with particular properties exist in reality, their categorization (e.g. into fluids and solids ) does not.
I suppose brains or selves don’t exist in reality either. I’m not sure what we’re getting at here. So where are categories then, if they don’t exist in reality?
Brains certainly do :-)
In your mind.
I’m pretty sure brain is a category too. Certainly more so than fluid or solid.
I’m not sure how spending time with a lover counts as a lack of interest in the outside world, even if the lover had come into existence via an unusual route.
I say it’s not a problem, but my views are outside the LW mainstream on this.
Depends on what the machine has optimized for. I’m not convinced that many definitions of better friends or lovers are vital optimization goals, or even good ones, in themselves. It’s quite easy to imagine a set of relationships that hits every desirable stimulus an individual enjoys, complete with short-term difficulties if necessary, but leaves the victim trapped in a situation where his or her preferences remain at a local optimum or are otherwise Not Correct by some grander standard.
Interaction with external minds and external situations not built toward you seems like a very important part of jostling folk out of such environments. Better optimization goals might do that, but it’s not an assumption you can easily make.
I’d argue that non-catgirl created beings are people (tautologically), and while relationships with artificially-produced people are fine in themselves, there are also some possible ethical issues with creating minds optimized for better relationships with certain people, though they’re likely outside the scope of this thread (energy efficiency compared to sorting existing minds, harmful desires, House Elves).
If you liked Scott Alexander’s essay, Meditations on Moloch, you might like this typographic poster-meme I made. It was a minor success on Facebook.
(If you haven’t read Scott Alexander’s essay, Meditations on Moloch, then you might want to check it out. As Stuart Armstrong said, it’s a beautiful, disturbing, poetical look at the future.)
I don’t understand… The point of the essay is that one should not anthropomorphize Moloch, and your meme does exactly that.
There is the line “thinking of the system as an agent throws into relief the degree to which the system isn’t an agent” so I see what you mean. But I think that just means that there’s no sane agent to deal with, no law of the universe that says we can appease Moloch in exchange for something.
But anthropomorphizing Moloch, perhaps poetically, is different, and there’s plenty of anthropomorphizing Moloch in the essay:
“But if we have bound Moloch as our servant, the bonds are not very strong, and we sometimes find that the tasks he has done for us move to his advantage rather than ours.”
“We will break our back lifting Moloch to Heaven, but unless something changes it will be his victory and not ours.”
“In the very near future, we are going to lift something to Heaven. It might be Moloch. But it might be something on our side. If it is on our side, it can kill Moloch dead.”
“Moloch is exactly what the history books say he is. He is the god of Carthage. He is the god of child sacrifice, the fiery furnace into which you can toss your babies in exchange for victory in war. He always and everywhere offers the same deal: throw what you love most into the flames, and I will grant you power. As long as the offer is open, it will be irresistible. So we need to close the offer. Only another god can kill Moloch. We have one on our side, but he needs our help. We should give it to him.”
My frail human mind is more motivated by war on a hated enemy than by abstractly maximizing utility, so I like the idea of frustrating a raging Moloch.
In the interest of trying out stuff outside the usual sphere-of-things-that-I’m-doing, I now have a fashion/lifestyle blog.
It’s in Finnish, but it has a bunch of pictures of me, which ought to be language-neutral. Also my stuffed animals. (And yes, I know that I need a better camera.)
Hi. I’m Portuguese and live near Lisbon. Are there any LWers out there that live nearby?
The latest survey (2013) shows zero respondents living in Portugal, and so I feel a bit lonely out here, especially when I read the locations for the LW meetups. They seem so close, only not really...
I guess I could make an effort to start my own meetup in Lisbon or something, maybe, I don’t know. I am a little shy and I don’t think I am capable of starting something like that on my own.
I work in academia, in the field of computer science, and thus am surrounded by people that would find this website appealing. I have in fact introduced this site (sometimes subtly) to some people I know, but haven’t seen anyone taking the time to read the Sequences and get in sync with this community. I want to try harder, though.
What would you say is the most effective way of capturing interest in this site? My tools are Facebook, and the chance to make a presentation about anything I want at the University to an audience of at most 30 people.
Getting people to read HPMOR is easier than getting them to read the sequences.
I remember seeing this organization on LW but cannot find it again or remember the name: it was a for-profit school-like entity that runs a short training program (might have been six weeks, maybe three months, that range), which is free upfront and takes its payment entirely as a percentage of the salary from the job it places you in afterward. If I remember correctly, it is run in the Bay Area and takes a small pool each session, with a school-like application process.
Can anyone point me to this?
That sounds like App Academy or one of its competitors.
App Academy was the one I was thinking of specifically, thanks.
I have yet to find any thoughts on Effective Altruism that do not assume vast amounts of disposable income on the part of the reader. What I am currently trying to determine are things like ‘at what point does it make sense to give away some of your income, versus the utility of having a decent quality of life yourself and insuring against the risk that you end up consuming charitable resources because something happened and you didn’t have an emergency fund’. Does anyone know of any posts or similar that tackle the effective utilitarian use of resources when you don’t have a lot of resources to begin with?
I don’t think there is a general answer to the question “How much should I consume?”
Is this a thing we should be asking someone who is an expert on Effective Altruism and economics and similar to have a go at answering?
You can ask, but why would the answer be anything other than someone’s personal opinion?
It’s a straightforward question about personal values. Do you think it’s a good idea to have experts in EA or economics tell you what your values should be?
No, but they might know things like the scale of diminishing returns in spending money on yourself, or the minimum level of wealth at which most people (in a given culture or country) report being satisfied with their lives.
They might have a personal anecdote about how they earn a million dollars a year and live in a ditch and have never been happier, and they might know the psychological reasoning why some people are happy to do that and some people aren’t.
I mean, yes, it’s true that their answer is not going to be everybody’s. But an attempt to answer the question seems very likely to turn up useful information that could help people make their own decisions.
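To make the diminishing-returns point concrete, here’s a toy model; log utility is a common modelling assumption, not a measured fact:

    -- Toy model: utility grows logarithmically in income.
    utility :: Double -> Double
    utility income = log income

    -- The marginal value of an extra $1000 shrinks as income grows:
    -- utility 21000  - utility 20000  ~ 0.049
    -- utility 101000 - utility 100000 ~ 0.010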
Putting money into an emergency fund where it can gather interest doesn’t mean that you can’t donate the same money 10 years from now.
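A quick sketch of the arithmetic (the 5% annual return is made up):

    -- Future value of money parked in an interest-bearing emergency fund.
    futureValue :: Double -> Double -> Int -> Double
    futureValue principal rate years = principal * (1 + rate) ^ years

    -- futureValue 1000 0.05 10 ~ 1628.89: $1000 held for 10 years
    -- could become a ~$1629 donation.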
I don’t have a link, but I suspect cutting it this fine is not very valuable. That last $10k would be a lot to you, but that wouldn’t make it more than any other $10k to a charity. Instead, ask how you could come to have a vast amount of disposable income, including whether it makes sense to spend some money toward that end. You may be able to get a very high rate of return investing in yourself.
To me, EA is more about how to answer the question “How should I be charitable?” than “Should I be charitable, and to what extent?”
Most of the EA stuff I’ve seen doesn’t appear to me to assume vast amounts of disposable income; merely enough to be willing to give some away. Then EA is about what to do with your charity budget, whether it’s large or small.
How you prioritize helping others versus helping yourself (and your family, if any) is a more or less orthogonal question.
(I might suggest, snarkily, that for someone who requires “vast amounts of disposable income” before being willing to give any away no term with “altruism” in it is very appropriate. But that wouldn’t be fair because e.g. your intention might be to secure yourself a reasonably comfortable life and then give away every penny you can earn beyond that, or something.)
That’s it, basically; it’s about how much of a buffer I’m ‘allowed’ to give myself on ‘reasonably comfortable’. I’m supporting myself and a full-time student partner, and am not in permanent full-time employment, so my instinct whenever I have a sniff of an excess is to hoard it against a bad month for getting work rather than do anything charitable with it (or it all goes on things we’ve put off replacing for monetary reasons, like shoes that are still wearable but worn out enough to no longer be waterproof).
I think Lumifer articulated better than I could what I really wanted to know the answer to, and while there may not be a general answer it does mean that I can at least go looking for things to read now my real question is clearer to me. So thanks!
There are two questions here. The first is how you trade off the value you place on your own welfare vs the value you place on the welfare of distant others. And the second is how having extra cash will benefit your mental health, energy levels, free time, etc. and whether by improving those attributes of yours you’ll increase the odds of doing more good for the world in the future.
I consider myself a pretty hardcore EA; I gave $20K to charity last year. But this year I’m saving all my money so my earning-to-give startup will have a bigger cash buffer. And I spend about $100/month on random stuff from Amazon that I think will make my life better (a weighted jump rope for exercising with, an acupressure mat for relaxing more effectively, nootropics, Larry Gonick’s cartoon guides to the history of the universe so I can relax & educate myself away from my computer, etc.)
So I guess the point I’m trying to make is you don’t even have to deal with the first values question if you decide that investing in yourself is a good investment from a long-run EA perspective. Don’t be penny-wise and pound-foolish… your mental energy is limited and if you find wet feet at all stressful, it’s worth considering replacing your shoes even before personal welfare gets added in to the equation.
In other words, I personally am more optimistic about you spending all of your money on yourself and spending some of your time and energy on a credible plan for significantly increasing your future EA impact than I am about you donating spare cash to charity and not spending any time and energy on such a credible plan. (In general, I suspect that the potential EA impact of time and energy is underrated; this article gives a good explanation.)
Thanks for the link; very helpful and interesting.
I don’t think “allowed” is the right way to think about it (and your quotation marks suggest that probably you don’t either). If you mean something like “what position do other reasonable people take?” or “what is the range of options that won’t make other people who think of themselves as EAs disapprove of me?”, I have no information on anyone else’s positions but my own is something like this:
1. If you are having difficulty feeding yourself healthily, paying for somewhere to live that isn’t falling down, etc., then feel free not to give anything.
2. Otherwise, I think it’s psychologically valuable to keep up the habit of giving something, even if it’s very little.
3. If you expect to be substantially better off in the future than you are now, there’s a lot to be said for optimizing lifetime income and aiming to give some fraction of that rather than feeling guilty about not giving a lot now. If hoarding some for now gives you more stability later, that’s probably better for everyone.
4. Once you’re reasonably comfortable financially, I think the traditional figure of 10% is a reasonable benchmark; you can answer the question “10% of what?” in various different ways, all somewhat defensible and leading to substantially varying levels of giving.
5. There are people who give quite a lot more. No one will think ill of you for not being one of them.
Thanks gjm, that’s a really helpful comment. (And yes, quotation marks indicate ‘this is the word I can think of but it is not necessarily the right word’.)
I think points number 1 and 3 are especially relevant for me right now, and I have found talking it through on here to be very helpful in defeating an entirely non-useful lingering sense of guilt for not giving more when I really can’t afford to, yet.
How do you think saving (in the standard financial sense) and giving should be balanced?
Um. With careful consideration?
Seriously: I don’t have any very strong opinions on this, nor any reason to think that anyone should care what my opinions are. In my own family’s case, we save substantially more than we give (very crudely, about 50% of income versus about 10%) but I’m not at all sure that if I thought about it longer and harder I wouldn’t conclude that we should be weighting global welfare higher relative to our own financial security.
:-) Don’t mean to pick on you, but the impression I get from EAs on LW is that your free cash flow is supposed to go save the world and I got curious about the apparent/potential disconnect from the general meme that people are supposed to save more so as not to be a drag on the society if something happens to them or when they retire...
The idea that your free cash flow should all go to save the world is generally based on a pretty straightforward utilitarian calculation, and it seems pretty clear that the same calculation would put saving lives in poor countries ahead of the small adverse consequences of drawing more on one’s own country’s social safety net. So I don’t think there’s much “disconnect” there.
In practice, very few people are quite so heroically altruistic as to reduce themselves to (what locally passes for) poverty so as to give everything to help the global poor. I bet the few who are, are already largely neglecting saving; the rest of us, I think, first decide how much we want to give away and then how we want to balance saving and consumption. So a tradeoff between saving and giving, as such, doesn’t arise.
For the avoidance of doubt, i very much don’t think of myself as any sort of heroic or expert altruist (effective or otherwise). My only role here is Some Guy Who Got Into A Thread About Effective Altruism :-).
Do you have a link? I’m just not sure that it’s that obvious that pumping my (hypothetical) money overseas is a utilitarian good if I end up costing my own society more than I give away (which is pretty likely—to use a US example, hypothetical-me might end up costing orders of magnitude more to treat in an emergency room when I get sick because I didn’t spend my own money on preventative healthcare).
Obviously the money hypothetical-I save the government isn’t automatically going to go to good causes, but by doing my bit to make the society poorer, am I reducing people’s overall tendency to have extra money to give away?
I dunno, probably need an economist and a lot of time to properly answer that question...
Nope. Just a lot of handwaving. Sorry.
But, e.g., if you get old and sick and it costs $100k to cure you in the USA, then the utilitarian optimum is probably to let you die and send the money to save 20 or more lives in sub-Saharan Africa. For the avoidance of doubt, I am not suggesting that you should endorse that policy; but if you bite the bullet that says you should send all your spare money to Oxfam or GiveDirectly or whoever, then I think you should probably also be biting the bullet that says you should be prepared to give up and die if you get sick and curing you would be too expensive.
On the other hand, if you’re young and get similarly sick, it might (on the same assumptions) be worth curing you so that you can carry on earning money and pumping it to the desperately needy. In which case it might indeed be worth spending some money first to stop that happening. But I’ll hazard a guess that the amount you need to spend to make it rather unlikely that you lose a lot of income because of ill health isn’t terribly large.
I suggest that unless you’re seriously inclined to really heroic charitable giving, you would do better not to worry about such things, and take decent care of yourself as I’m sure you would rather do, and give generously without impoverishing yourself. Especially at present—if you don’t have a lot of money, the difference between heroism and ordinary giving is going to be pretty small. Once you’re in a better situation financially, you can reconsider how much of a hero you want to be.
Well, another complicating factor—in my particular case—is that with chronic and especially mental health conditions, it’s actually very difficult to separate ‘preventative healthcare’ from frivolous spending. A lot of the things someone with my mental health might buy and do to stay sane don’t look like healthcare spending at all. A lot of things that it is considered normal and even laudable to sacrifice for one’s education or career, especially when the latter is just beginning, such as sufficient sleep and leisure time, non-work-related social contact, etc., are actually things where an insufficiency over more than a week or so will worsen my condition.
So you end up with people with conditions like mine spending money on things like ordering out to save time and energy, hiring help with the housework, paying frequently for travel to see friends—and it’s not clear, even to the person whose life it is, how much of that is sanity preservation and how much is just nice to have (and how much, if any, is nice-to-have but you tricked yourself into believing it was sanity-preservation).
But that’s a far more complicated question that I’m not going to ask people here to even attempt to answer.
Oh, if you have an already-existing condition—whether “physical” or “mental”, whether obvious at a glance or subtle and hidden—then of course it’s far more likely that there’s stuff you need to spend to keep yourself functioning well. I don’t think any reasonable person, inside or outside the EA movement, would have any objection to that. Even from a pure bullet-biting maximize-cash-flow-to-Africa perspective, you almost certainly do better to keep yourself functioning rather than giving everything you can in the short term and collapsing in a heap.
[EDITED to add: If whoever downvoted this is reading, I’d be interested to know why. I’m wondering whether I accidentally said something terribly insensitive or something.]
I am not sure this is the case. Saving and giving are kinda fungible without any immediate impact on you—that’s got to be tempting...
I like the notion of the Superintelligence reading group: http://lesswrong.com/lw/kw4/superintelligence_reading_group/. But the topic of AI doesn’t really interest me much.
A reading group on some other topic that is more along CFAR’s lines than MIRI’s would. For example, reading recent studies of cognitive bias would be interesting to me. Discussion on how practically to combat them might evolve from discussing the studies.
Max L.
I would be up for it.
I seem to have high karma, but don’t know why. Looking through my contribution history, I seem to only have a total of 47 net upvotes on anything I’ve ever posted, but have 74 karma points, including 10 in the last 30 days. Looking at the LW wiki FAQ, it says that you can get 10 karma per upvote if you post in main, but I haven’t done that. Does anyone know why this might be happening?
I seem to have picked up 30-40 karma I can’t account for over the past week. I wondered if it was some effort to undo the efforts of identified mass-downvoters.
I have also noticed a bit of a spike, but if it were a de-Euginiering effort, the change would be 1000+ points for me, not 30-40. So probably something else is going on.
That’s probably not it, given that I was one of Eugine’s identified victims, and my karma has not changed in > 30 days.
Also, here is the discussion from the previous OT.
That doesn’t seem likely in my case, since the only non-meetup things I’ve posted before today have been about the MWI and Scott Aaronson’s take on integrated information theory.
I’ve had a similar thing. Prior to that I had a bunch of −1s that arrived all at once and which I considered unjustified.
So you’re saying there’s a mass up-voter lurking out there? :) (If so I may have been the object of their efforts too.)
See the last open thread—several people (including me) have been victims of a mysterious mass upvoter who has gone through everything we’ve posted in the last month or so adding +1s indiscriminately up to about +30 to +40.
Personally I suspect it is a psychological experiment, examining the difference in reaction between this mysterious mass upvoter and the recent mysterious mass downvoter. Or, y’know, a troll.
Or someone who read one of your comments, liked it, and then read others and liked them as well.
I was thinking more mod/admin efforts. I’ve never apparently been the victim of mass-downvotes, but if there was an intervention to remove all downvotes made by a troublesome user, the sudden removal of all their cumulative downvotes over time might look like an unexpected karma boost.
Recently the discussion on modeling a better version of yourself was moved to Main. Had you made any comments there?
No, but I just realised that everything adds up if I assume that meetup posts also get 10 karma for every upvote. Given that this sort of makes sense but that I can’t find it mentioned anywhere, I’m not sure whether it’s a feature or a bug.
It’s an undocumented feature.
Mystery solved!
Since 23andme has been prohibited from giving health-related genetic reports, is there anyone else (outside the FDA’s jurisdiction) who provides similar services?
Edit: I have found Promethease, which works with 23andme’s raw data. I’m still interested in additional options.
Edit2: This page lists various 23andme competitors, although it was last updated in early 2013. More recent information is appreciated.
You can download your raw SNP-call data from them and run it through a plethora of third-party programs. Won’t have the slick interface or the collation of multiple SNPs that affect the same trait but definitely tells you what you want to know about Mendelian diseases and you can sift through the rest.
See http://www.23andyou.com/3rdparty for some of the tools.
Thanks a lot!
I just tried to do a Fermi calculation on the value of getting a fireproof, theft-resistant document safe, but can’t find a good number for the cost of identity theft. Does anyone have one on hand?
I don’t, but the cases of identity theft I hear about in the news aren’t done by entering someone’s home to acquire their papers. What scenarios are you intending to defend against with the safe?
I’m not sure; basically I hear a lot of vague references to it being good to have such a safe, but can’t figure out what, if anything, it is actually important for.
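For what it’s worth, the Fermi estimate itself is just a product of a few guesses, something like this (all numbers made up):

    -- Expected benefit of the safe = P(loss event) * cost of the event * years.
    expectedBenefit :: Double
    expectedBenefit = pLossPerYear * costPerLoss * yearsOfUse
      where
        pLossPerYear = 0.002 -- assumed yearly chance of theft/fire hitting your documents
        costPerLoss  = 2000  -- assumed cost of identity-theft cleanup and replacement
        yearsOfUse   = 10

    -- ~ $40 of expected loss averted, to compare against the safe's price.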
I’ve never been entirely sure about the whole “it should all add up to normality” thing in regards to MWI. Like, in particular, I worry about the notion of intrusive thoughts. A good 30% of the time I ride the subway I have some sort of weak intrusive thought about jumping in front of the train (I hope it goes without saying that I am very much not suicidal). And since accepting MWI as being reasonably likely to be true, I’ve worried that just having these intrusive thoughts might increase the measure of those worlds where the intrusive thoughts become reality. And then I worry that having that thought will even further increase the measure of such worlds. And then I worry...well, then it usually tapers off, because I’m pretty good at controlling runaway thought processes. But my point is...I didn’t have these kinds of thoughts before I learned about MWI, and that sort of seems like a real difference. How does it all add up to normality, exactly?
Whatever argument you have in mind about “the measure of those worlds” will go through just the same if you replace it with “the probability of the world being that way”. You should be exactly equally concerned with or without MWI.
The question that actually matters to you should be something like: Are people with such intrusive thoughts who aren’t generally suicidal more likely to jump in front of trains? I think I remember reading that the answer is no; if it turns out to be yes (or if you find those thoughts disturbing) then you might want to look into CBT or something; but MWI doesn’t have anything to do with it except that maybe something about it bothers you psychologically.
Okay, fair enough, forget the whole increasing of measure thing for now. There’s still the fact that every time I go to the subway, there’s a world where I jump in front of it. That for sure happens. I’m obviously not suggesting anything dumb like avoiding subways, that’s not my point at all. It’s just...that doesn’t seem very “normal” to me, somehow. MWI gives this weird new weight to all counterfactuals that seems like it makes an actual difference (not in terms of any actual predictions, but psychologically—and psychology is all we’re talking about when assessing “normality”). Probably though this is all still betraying my lack of understanding of measure—worlds where I jump in front of the train are incredibly low measure, and so they get way less magical reality fluid, I should care about them less, etc. I still can’t really grok that though—to me and my naive branch-counting brain, the salient fact is that the world exists at all, not that it has low probability.
I have always taken “it all adds up to normality” to mean not “you should expect everything to feel normal” but “actually, when you work out the physics, all this counterintuitive weird-feeling stuff produces the world you’re already used to, and if it feels weird then you should try to adjust your intuitions if possible”.
I’m not sure there’s much I can say to help—it’s clear from your comments that you understand in theory what’s going on, and it’s just that your “naive branch-counting brain” is naive and cares about the wrong things :-).
Maybe this will help: Suppose you’re visiting a big city. Consider the following two propositions. (1) There is one person in this city who would cheerfully knock you on the head and steal your wallet. (2) Half the people in this city would cheerfully knock you on the head and steal your wallet. I don’t know about you, but I would be really scared to learn #2 and totally unsurprised and unmoved by #1. Similarly: “there are branches in which you jump in front of the train”—well, sure there are, and there are branches where I abruptly decide to declare myself Emperor of the World and get taken off to a mental hospital, and branches where the earth is about to get hit by an asteroid that miraculously got missed by everyone’s observations and we all die. But there aren’t “a lot” of any of these sorts of branch (i.e., the measure is very small). What would worry me is to find that a substantial fraction of branches (reckoned by measure) have me jumping in front of the train. But what it takes to make that true is exactly the same thing as it takes to make it true that “with high probability, you will jump in front of the train”.
You don’t see other people doing so, and I can assure you that far more people have such thoughts than ever jump. Any MWI weirdness would only affect what you recall of your OWN actions in this case.
Donation sent. !@#% those !@#&!.
EDIT: Oops, wrong place, this was supposed to go under ITakeBets’ post.
A medical issue is a problem if the patient recognises it as one. If a patient suffers from something that is not recognised as a medical problem, we call it hypochondria. Is there a concept for something we see as a medical problem but the patient does not recognise as one, e.g. because they don’t know that their condition is not normal?
https://en.wikipedia.org/wiki/Anton%E2%80%93Babinski_syndrome
https://en.wikipedia.org/wiki/Anosognosia
https://en.wikipedia.org/wiki/Anosodiaphoria
https://en.wikipedia.org/wiki/Somatoparaphrenia
https://en.wikipedia.org/wiki/Confabulation
Terminology regarding missing symptom awareness depends on what is thought to be the cause. Anosognosia and other agnosias would be used for neurological disorders where self-monitoring is specifically impaired while denial, delusions and hallucinations would be used for psychiatric disorders. Denial could also be a psychiatric symptom concerning a somatic disorder. I’m not sure if other somatic fields than neurology have special terminology.
Not really. For example, grief is not recognized as a medical problem; people suffer from it and we don’t call it hypochondria.
Hypochondria is excessive worry about having a serious illness.
ETA: I think that whatever we choose to call a medical problem largely depends on our values, and mere deviation from the biological norm does not a medical problem make. So the hypothetical patient could also simply disagree with others about what constitutes a medical problem.
I think the standard terminology is “undiagnosed illness”.
Here are two bookmarklets that have really helped my article-reading workflow. I named the bookmark for #1 “Clean” and #2 “Squirt”:
#1 is a nice, simple frontend for the Readability API. Just enter the URL of a page with something you want to read on it, and it extracts the content without any sidebars, ads, or other junk and gives it to you in an easy-to-read format.
#2 is Squirt, a speed-reading application that takes the text of any webpage and displays it to you one word at a time at an adjustable speed. The default is 450wpm I think, but after you make an adjustment, it remembers what speed you want for next time. If you need to read a part more carefully or go back because you missed something, that’s easy: hit the spacebar to pause and it will show the context, then use the left and right arrows to move around. Hit the spacebar again to resume speed reading. Another awesome feature is that it tells you exactly how long it will take to finish if you don’t stop, so you can decide if it’s worth your time or not.
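(The time-remaining feature is just word count divided by reading speed:)

    minutesToRead :: Int -> Int -> Double
    minutesToRead wordCount wpm = fromIntegral wordCount / fromIntegral wpm

    -- minutesToRead 4500 450 = 10.0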
The two work really well together, as squirt alone will sometimes grab text you don’t want. What I do is “Clean” a page by clicking on the bookmarklet, and then sometimes hit “Squirt” to speed read it.
Try it out and let me know what you think!
Which speed are you using at the moment and how long did it take you to come to that speed?
I started at 350 and that’s still what I use most of the time. For light, non-technical articles, I can do 450, but it’s a bit uncomfortable to focus that hard and I do miss things occasionally. I can usually tell if it was important or not though, so I know if I need to pause and rewind. After playing with speedreading off and on for a few years, I’ve come to the conclusion that it’s definitely possible to read faster than I normally do with equal comprehension, but that there really is a limit and the claims you see from speedreading courses are hyperbolic. The thing I like about Squirt is that it eliminates the need to use a pacer.
1 didn’t work for me.
For example, trying it on tvtropes.org got me http://tvtropes.org/pmwiki/pmwiki.php/%3Chttp://justread.mpgarate.com/read?url=%3Ehttp%3A//tvtropes.org/pmwiki/pmwiki.php/Main/HomePage
Looks like this is a bug with the way LW parses markdown. You need to remove the angle brackets just inside the quotes.
That fixed it. Thanks.
Hmm… Yeah, that’s not right. Maybe there was a problem when I pasted it? Here it is again.
Only other thing I can think of is you may have a browser extension interfering.
It still doesn’t work. It could be an extension, but I was guessing it was just the browser. I’m using Chrome. javascript:alert("test") seems to work if I type it directly or use a bookmark. It doesn’t work if I copy and paste.
Did you try copy/pasting my link into a bookmark? That’s what I was recommending, sorry if that wasn’t clear. When I copy/paste anything starting with javascript: directly into the URL bar, the javascript: part gets dropped.
When speaking about battling ISIS, the alternatives for the West seem to be either air strikes or boots on the ground. Boots on the ground means actual personnel. Why isn’t there a version of boots on the ground that’s completely robot-based? Why are human bodies still needed for waging inner-city warfare?
Considering that this is the state-of-the-art in animal-like robot movement, I can see why we still use meat-soldiers.
Because warfare is complicated? Are you talking about drone robots?
The word ‘drone’ refers to something that flies. You could mix flying and non-flying robots.
What’s the bottleneck, where robots don’t perform?
Rough terrain
Adverse weather conditions
Dealing with civilians
Going up and down flights of stairs
Taking prisoners
Medical care
Being underground
probably a lot more
Prolonged functioning at high energy levels far from usable energy sources.
To what extent are those issues likely to be resolved in 10 to 20 years, to an extent that would change the geopolitical situation?
Not very likely. In 10-20 years we might get a self-driving car which is a MUCH easier problem than a battlefield robot.
Google already has self-driving cars. The issue is more about making them safe enough that the company doesn’t get sued into the ground when the cars get into accidents. Additionally, you need to pass laws that make them legal.
Military technology doesn’t suffer from the same hurdle.
Kinda sorta maybe not really.
Dammit, I’ve got to pay more attention to those feelings of “really?” Driverless cars at current levels of tech seemed faintly implausible, but I ignored that in favor of “I keep hearing it in the news” and “google=magic”.
On the other hand, self-driving cars might make sense for slow-moving traffic jams.
Huh, looks like I’ve been fooled by journalists again. Thanks!
On the other hand, they have to drive through terrain that has been intentionally modified to be difficult for their algorithms.
I’d guess that communications are a problem—you’d need more bandwidth to send enough video back to drive a car remotely than to fly a plane, and it’s probably easier to lose contact, too. Not to mention the difficulties of fighting inside a city you don’t want to simply destroy: can your robot open a door and go up a flight of stairs?
This is the kind of thing that’s being researched by the dreaded Military-Industrial Complex, though.
This is where they’ve got to (scroll down to the archive link). It isn’t yet anywhere near good enough for the task.
For remote rather than autonomous operation, there would be major humanitarian applications as well, but the technical problems are still huge. There’s latency and reliability of communications, terrain that would be challenging even for people on the spot, dexterity in confined spaces, and the problem of refuelling. None of this is a Simple Matter Of Engineering.
Noise.
A little help communicating some ideas?
Anyone up for beta reading a 2,000 word section of my attempt at an aspiring-rationalist story, S.I.?
I’ve just finished putting together an initial draft of Bunny pontificating about the ideas discussed in https://www.reddit.com/r/rational/comments/2g09xh/bstqrsthsf_factchecking_some_quantum_math/ . I could really use some feedback to make sure I’m having her explain them in a way that’s actually comprehensible to the reader. Anyone who’d like to help me with this, I’ve pasted the initial draft to a GoogleDoc at https://docs.google.com/document/d/1lOQAAM3fdnF2ew7CgBQqSLtk21_4Foa8n7IB3Ay0ze8/edit?usp=sharing , which is set to allow comments.
Are you operating under Crocker’s rules?
Also, if you want strictly writing help, I was recently made aware of the existence of /r/destructivereaders (h/t Punoxysm).
I’ve claimed to operate under Crocker’s rules for some time now—though this might be the first time anyone has invoked that.
I’ll take a look at that subreddit; but at the moment, I’m mainly concerned with figuring out how to best communicate the new ideas presented in the draft (the Lottery Oracle, etc) to the reader (keeping in mind that said reader is going to have gone through roughly 120,000 words of my attempt at rationalist fiction to get this far), rather than any particular grammar or style details that don’t affect that goal.
I’m afraid that /r/destructivereaders doesn’t look like a good place for me. They’re set up so they (just about) require submitters to have previously critiqued multiple other submissions, and I’m already trying to come up with clever ways to make sure I spend so much time each day working on my story, instead of being distracted by all the shiny things on the internet.
From what I’ve looked at so far, it looks like they tend to focus on the basics. I already know that I over-use semicolons, and use sentences that are too long and complicated (and contain multiply nested subclauses (like these)) for many readers’ comfort, and that I’ve been skimping on descriptions which aren’t directly plot-relevant. I don’t anticipate that being told these facts yet again would be worth the time I’d spend critiquing other posts for my entry fee.
If you know, why don’t you solve the issue? Rereading a post and asking yourself for every sentence: “Can I break this down into multiple sentences?” and “Does this sentence really need that many words?” isn’t complicated.
Identifying common errors in the writing of other people is good training to then identify the same errors in your own writing.
I also know that I should arrange my diet to have more fruits and vegetables than it currently does. However, after a few decades of doing things one way which isn’t optimal, but is good enough to get the job done, simply “deciding” to do things a different way isn’t enough to alter all the various unconscious mental sub-units whose interactions lead to the behaviour in question.
I bought a pumpkin pie yesterday instead of a box of cookies. I’ve started using beta-readers. Neither one is a perfect solution—but each is a single-step improvement over the previous situation, a step that is within the range of behaviours I can get my unconscious processes to actually accomplish, and hopefully, isn’t going to be the only step.
While I won’t claim that my writing is very good, I think I’ve learned to handle the issue of writing sentences that are too long. It just takes a decision to allocate some time to reviewing your own writing and then looking at every sentence.
I don’t do that for every one of my LW posts, but if I were writing a blog or fiction I would.
Done. See the comments under the name Carlos.
Thank you kindly. :)
I’ve accepted most of your suggestions, and responded to the ones that I haven’t.
I’ve had several unexplained jumps in karma over the last few days, amounting to around 80-100 points. Someone else mentioned the same, and I believe it’s happened to quite a few people. If that’s a side effect of reverting the votes of systematic downvoters, fine, but if we now have a systematic upvoter, I really don’t want to see this. It doesn’t have the same emotional overtones as downvotes, but it obscures the signal in the same way.
Another possibility is that a new reader, or more than one, is reading through the archives and voting on whatever they feel voteworthy. That’s fine as well.
I too have had some unexpected karma-jumps lately, and I feel the same way.
… Aaaand now I just lost about 40 within an hour or two, including downvotes on some obviously unobjectionable comments. Looks like someone’s taken a dislike to me. Anyone else had the same?
I recall this being the norm before the dark days of Euginiering.
I got this too, but I was probably the worst recipient of downvotes percentage-wise and the upvotes didn’t even make up for the downvotes yet in terms of absolute karma value (let alone in ratio, which would require getting many times the upvotes).
I also noticed that my recent upvotes included a fair number of 2s and 3s and higher numbers, and there were some posts that didn’t get voted up at all—in other words, they were distributed in the way I would expect if the upvotes came from multiple people. The downvotes from Eugine were not distributed that way, which was a dead giveaway that they all came from one person.
Now I’m listed among the 15 “Top contributors,” which is absolutely not possible.
Given that we just got a new moderator, it might very well be that someone wants to test what the response is when someone goes and votes up systematically.
I personally also got similar jumps in my karma.
How do I build the habit of writing down a fleeting thought that seems interesting? Way too often I notice that I just wanted to do something or write something down. Or should I just accept the thought as gone?
I carry at all times a tiny notebook (smaller than my hand) and a pen so small I can barely use it comfortably. That’s low tech and not very efficient (because I’ll need to type it up later), but very quick, easily survives the sometimes inhospitable pocket environment, doesn’t need electricity and works for non-textual thoughts.
I do this too, and mine holds my former wallet’s contents in a pocket, so it’s not even an extra thing to track.
Do you have a smartphone? Just hit the voice command button and say “Make a note: [whatever you need to make a note of]”
Alternatively, I find it’s easier to do things when you’ve already started. Maybe make a note on your phone and add a couple ideas. Then, the next time you have an idea, you already have a dedicated place to put it, so you just put it there instead of wondering what to do with it and ending up doing nothing because you can’t get past the inertia of starting. Then award yourself a mental point as a reward to train your mind to keep coming up with ideas and writing them down.
And if you do forget something, don’t worry about it too much. The whole “I just had a really good idea but now I forgot, oh no!” pattern is really common. Much more common than the “I just had a great idea and I wrote it down and I still think it’s great” pattern. I’ve always taken this as evidence that the idea you forgot wasn’t actually that great and it only feels that way because you forgot about it and you’re suffering from a kind of forbidden-fruit/grass-is-greener bias. People tend to remember really good ideas because they’re contextual or actionable.
Here’s my system for that:
I always carry an LTE-connected smartphone capable of gesture typing, so I’m able to quickly write down anything whenever and wherever it occurs to me, be it in a park, in a forest, at work, on a toilet etc. (My personal preference is a high-end big-screen phone with a stock Android (currently Nexus 5), but as of September 17 2014, you can use iOS 8 with a custom keyboard).
I use several mobile apps intended for capturing different kinds of thoughts: Wunderlist, Trello, Google Docs. I prefer these apps because they all sync to the cloud, which means that 1) I can access the content on any platform, and 2) that the phone is essentially disposable and I won’t lose my notes when it gets lost or stolen.
Here’s how I capture thoughts:
If the thought is actionable, it goes to Wunderlist (a classic todo list app which I hate but alas, I can’t seem to find a better alternative).
If the thought is related to an ongoing project, it goes into an appropriate Google Doc or the Trello board of that project. If the thought is large enough, it may warrant the creation of its own Google Doc.
If the thought is related to self-improvement / self-discovery, it goes to a Trello board dedicated specifically to that.
I usually ask these as questions on Quora. Quora is incredibly tolerant of even inane questions, and has the benefit of allowing others to provide feedback (in the form of answers and comments to the question). If a question has already been asked, then you will also be able to read what others have written in response and/or follow that question for future answers. Quora also has the option of anonymizing questions. I’ve found that always converting my thoughts into questions has made me very conscious of what sort of questions are interesting to ask (not that there’s anything wrong with that).
Another idea is to practice this with writing down dreams. After waking up, I often think “It’s not really worth writing that dream down anyway”, whereas in reality I would find it quite interesting if I came back to it later. Forcing oneself to write thoughts down even when one is not inclined to may lead to more sedulous record-keeping. (But this is just speculation.)
I have Simplenote installed on my phone, and I pull things out and note them there very frequently.
(Later, they become blog posts/to-dos/etc)
get a twitter
Okay, I know next to nothing about Haskell, and next to nothing about provability logic, so maybe what I’m about to ask doesn’t make any sense, but here’s something that’s making me very curious right now. How do I implement a function like this:
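(The code block here is a reconstruction; the original snippet didn’t survive, but going by the replies the signature was presumably the Löb formula, with p playing the role of the provability box:)

    -- Loeb's theorem as a type: from a proof that "a proof of a yields a",
    -- obtain a proof of a. How to implement this totally is the question.
    loeb :: Prov p => p (p a -> a) -> p a
    loeb = undefined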
using some typeclass like this:
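(Also reconstructed; the methods dup and mp named later in the thread correspond to the standard modal axioms, so presumably:)

    class Prov p where
      dup :: p a -> p (p a)           -- axiom 4: provable a implies provably provable a
      mp  :: p (a -> b) -> p a -> p b -- axiom K: modus ponens under the box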
The idea is that the implementation of loeb should follow the steps of this proof, and the methods of Prov should correspond to the assumptions of that proof. The question was inspired by this post by sigfpe, but he thought that the class should be Functor, which seems wrong to me.
Apologies for the formatting, it turns out LW collapses whitespace even in preformatted blocks.
I’m guessing that we get soundness :: a → p a for free in your notation?
I think you wanted loeb :: Prov p ⇒ (p a → a) → p a.
Scala implementation, I think:
Thanks a lot! But your soundness condition isn’t one of the assumptions of the theorem, and it’s still too strong for my taste. Maybe I should clarify more.
I want the implementation of loeb to actually be a proof, by Curry-Howard. That means the implementation needs to be translatable into a total language, because non-totality turns the type system into an unsound logic where you can prove anything. An extreme case is this silly implementation of loeb, which will happily typecheck in Haskell:
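(Another lost snippet; the classic example of a definition that typechecks only because Haskell permits nontermination is the one-liner that calls itself:)

    -- Typechecks at essentially any signature, but just loops forever.
    sillyLoeb :: Prov p => p (p a -> a) -> p a
    sillyLoeb x = sillyLoeb x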
Sadly, your version of loeb can’t be made total. To see why, consider the identity wrapper type:
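(In Haskell terms, presumably:)

    newtype Id a = Id a

    -- Every Prov method is trivially implementable for Id:
    instance Prov Id where
      dup (Id x)       = Id (Id x)
      mp (Id f) (Id x) = Id (f x)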
You immediately see that loeb specialized for Id is just the Y combinator with type (a → a) → a, which shouldn’t be implementable in a total language, because (a → a) → a isn’t true in any logic. To avoid that, the typeclass must have some methods that aren’t implementable for Id. But all methods in your typeclass are easily implementable for Id, therefore your loeb can’t be made total.
The same argument also rules out some other implementations I’ve tried. Maybe the root problem is the use of types like Fix, because they don’t have quite the same properties as the diagonal lemma which is used in the original proof. But I don’t really understand what’s going on yet, it’s a learning exercise for me.
I think the problem is hidden beneath the fixed point operator (I’ve edited the code to be more correct). For Id, is there actually a type Psi such that Psi <~> (Id[Psi] ⇒ A)? Isn’t that where the Y combinator comes in—the only reason loeb is able to act as a Y combinator is that it gets one passed in via the fixed point lemma? Isn’t a fixed point lemma always going to be the same as a Y combinator?
(I wish we were working with a proof that didn’t invoke this external lemma)
I asked the folks on /r/haskell, we hashed out a version in Agda and then I translated it into Haskell. It’s not completely in the spirit of the original question, but at least it’s a starting point. The code is here, you can try it out on CompileOnline.
ETA: now I also wrote a post about it.
Hmm, you’re right, in a total language you can’t define such a Psi for Id. Maybe the type class should go like this:
I don’t know if that’s enough. One problem is that fix and unfix are really inconvenient to use without type lambdas, which Haskell doesn’t have. Maybe I should try Scala, which has them.
Another problem is in step 13. To do that without soundness, I probably need versions of dup and mp that work inside p, as well as outside. But that makes the type class too complicated.
Does this Haskell code answer your question?
The two directions of isomorphism:
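(The snippets here are missing, but from the reply they involved the standard fixpoint type, presumably:)

    newtype Fix f = Fix (f (Fix f))

    -- The two directions of the isomorphism Fix f <~> f (Fix f):
    fix :: f (Fix f) -> Fix f
    fix = Fix

    unfix :: Fix f -> f (Fix f)
    unfix (Fix x) = x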
Not sure. Fix is a type-level Y combinator, while loeb is a value-level Y combinator.
I think that’s not possible. Löb’s theorem requires the diagonal lemma.
Functor is a weak typeclass; the only thing it implies about □ is that □(a→b)→□a→□b (which we already know to be true). So the idea of using Functor was just to make the minimum possible assumption.
I don’t know enough Haskell to answer your question, but if you can write it in Scala I’ll give it a go.
The problem is that Functor’s operation isn’t quite what you wrote, but rather (a→b)→□a→□b, which is way too strong. I don’t think it holds in provability logic. That’s why I want to define an even weaker typeclass Prov, but I’m not sure what methods it should have, apart from dup and mp.
If you give me some Scala code, I think I’ll be able to make sense of it :-)
You’re right, I was taking the linked thing at face value.
The signature given is almost exactly Comonad. If I’m reading this right, Loeb’s theorem gives you something vaguely interesting: it’s a function from C[A] ⇒ A to A. So it tells you that any function that “flattens” a comonad must have some kind of “zero” value—e.g. any Stream[A] ⇒ A must give rise to a distinguished value of type A—which you can extract without ever having an instance of Stream[A].
I’ve replied with Scala code upthread.
What supplements do people take?
I currently take Vitamin D, fish oil, creatine, lithium, iron, multivitamin and melatonin (at bedtime).
It would be interesting to also know their reasons, and if they notice positive effects.
I take calcium and vitamin D, prescribed for medical reasons. Nothing else. No real way to tell what the effect is short of DEXA scans, but those are x-rays, so you can’t do many of them. I’m not breaking any bones now, but I never did.
On what basis do you take Iron?
There is some decent evidence of correlation between iron deficiency and depression and anxiety, which is an issue I have.
I think it would make more sense to get yourself tested for the deficiency. I think the current view is that most people overconsume iron.
Iron.
From an article I’m reading:
It may be hard to tell without the context, but they are suggesting that these revised risk assessments would not be useful. My initial thought is: “If having an estimate is helpful, having a more accurate estimate would be better, and there seems to be a big difference between 1/500 and 1/1000.”
Any thoughts?
Full article: https://d396qusza40orc.cloudfront.net/ethicalsocialgenomic/DeflatingTheGenomicBubble.pdf
There are common diseases you should worry about and rare diseases you shouldn’t worry about. A factor of 2 does not move Crohn’s from rare to common. The difference between a 70% chance of dying of heart disease and a 30% chance sounds pretty big, but what would you do differently? Either way, it is a big chunk of likely mortality. A factor of 2 is unlikely to change the cost-benefit analysis of actions that might protect you from heart disease. If such an action is useful, it is useful for most people.
Some rare genes do move diseases from rare to common. A broken BRCA (1 in 10k) moves a woman from a 10% chance of dying of breast cancer to an 80% chance of dying of breast cancer, and dying at a young age. Mammograms are valuable for the second woman and not for the first. Some women have prophylactic mastectomies. But if you ask Myriad to test your BRCA, in addition to this useful information, it will also talk about minor variations with useless effects on the risk.
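One way to see why a factor of 2 often doesn’t matter: model the decision as risk times benefit versus cost. The numbers below are made up for illustration:

    -- Screening is worth it when risk * benefit exceeds its cost.
    worthScreening :: Double -> Bool
    worthScreening risk = risk * benefitOfEarlyDetection > costOfScreening
      where
        benefitOfEarlyDetection = 50000 -- assumed value of acting on a true positive
        costOfScreening         = 500   -- assumed cost/harm of screening everyone

    -- worthScreening (1/1000) = False
    -- worthScreening (1/500)  = False -- doubling the risk doesn't cross the threshold
    -- worthScreening 0.1      = True  -- BRCA-scale risk is a different regime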
Thanks for your reply.
Do you think it would be fair to say that for rare diseases (that are not determined by single loci mutations, like Huntington’s or BRCA, as you described) it’s silly to get a test because a small movement in your risk profile is meaningless in that it wouldn’t impact your treatment or behavior in a meaningful way?
Could you explain what you mean by:
Do you work in a related field? You explained this rather concisely, thanks.
Whether they are about rare diseases or common diseases, almost all results that you get out of 23andMe are silly because they don’t have rational effects on potential behavior. (They may have irrational effects—if you can use them to motivate actions that you ought to be doing anyway, that’s great. But there are also bad irrational reactions.)
Depending on your genetics, your chance of dying of heart disease might be as low as 30% or as high as 70%. (I made up those numbers; I suspect the real range evaluable with current genetics is much narrower.) Even the low number, 30%, is very high. If you have a 30% chance of dying of something, you should think about it and react to it. Even in the best case, you still have to think about heart disease.
A friend and I hope to host a MIRIxVancouver workshop in Vancouver, Canada sometime in October. We haven’t filed an application with MIRI yet, and we haven’t set a date, so there’s no schedule yet. So, this is just a shout-out to anyone who might want to get involved in it over a weekend, including if you want to visit from Seattle, or Oregon, or anywhere nearby. Comment below, or send me a PM, if you’re interested in attending.
Vancouver has enough of a diversity of people interested in MIRI who can host their own friends that I believe it will make sense to host multiple different MIRIx workshops. Like, I find MIRI very interesting, but I want to grasp technically what it’s about. Another one of my friends is interested in the philosophy of A.I., and yet another friend is a former MIRI intern who will invite a bunch of his friends from the university math department. So, I’ll likely host workshops at different levels of depth, or with different topics.
I was thinking about anthropics after seeing some posts here about it. I read the series of posts on ADT including http://lesswrong.com/r/discussion/lw/8aw/anthropic_decision_theory_iv_solving_selfish_and/, and EY’s posts http://lesswrong.com/lw/17c/outlawing_anthropics_an_updateless_dilemma/, http://lesswrong.com/lw/19d/the_anthropic_trilemma/, and http://lesswrong.com/lw/17d/forcing_anthropics_boltzmann_brains/. I had a few questions about those posts.
First, how is average utilitarianism defined in a non-circular way? I’m trying to wrap my head around why I don’t agree with the conclusions of the first post I linked, and it seems to come down to not understanding average utilitarianism.
More specifically, do they define two levels of utility? Or do they exclude themselves from the calculation? I thought it was just a different way of allocating your own utility, but how do you calculate which way will give you the most utility by giving the world a greater average utility, without knowing the answer of your own utility to plug in?
Second, in http://lesswrong.com/lw/19d/the_anthropic_trilemma/ EY ended off with
Has he been officially “impressed” yet? Should I read any specific attempts to solve the trilemma? What reading can I do on anthropics to get an idea of the major ideas in the field?
It seems to me that SIA, in the way it’s been applied, is obviously correct, and in general I feel like I have very clear intuitions on these kind of problems. I plan on writing up something eventually, after I understand the argument against my point-of-view to argue coherently.
If you can quantify a proto-utility across some set of moral patients (i.e. some thing that is measurable for each thing/person we care about), then you can then call your utility the average of proto-utility over moral patients. For example, you could define your set of moral patients to be the set of humans, and each human’s proto-utility to be the amount of money they have, then average by summing the money and dividing by the number of humans.
I don’t necessarily endorse that approach, of course.
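In code, the money example would look something like this toy sketch:

    -- Average utilitarianism over a finite set of moral patients,
    -- with each patient's money as their proto-utility.
    averageUtility :: [Double] -> Double
    averageUtility protoUtilities =
      sum protoUtilities / fromIntegral (length protoUtilities)

    -- averageUtility [100, 200, 600] = 300.0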
I think Eliezer says he’s still confused about anthropics.
So far as I know, Nick Bostrom’s book is the orthodox foremost work in the field. You can read it immediately for free here. Personally, I would guess that absorbing UDT and updateless thinking is the best marginal thing you can do to make progress on anthropics, but that’s probably not even a majority opinion on LW, let alone among anthropics scholars.
We managed to reduce performance on any number of tests to essentially a single number, g, together with a couple more for domain-specific skill. We managed to reduce the huge variation in personalities to five numbers, the OCEAN dimensions. I even recall reading that there is quite some correlation between those five numbers and that they might be reduced to a single one but I can’t find the source any more.
Can we construct a whole host of other, similar numbers, like “math skills” and thus measure the impact of education and aging?
Another thing I have in mind: can we construct three numbers (general health gh, mental health mh and physical health ph) and measure their correlations? I have the vague observation that medical issues tend to cluster; that is, people with mental issues tend to exhibit not just one of ADHD, depression, OCD and so on, but more than one of them. Similarly, I have the impression that people tend to complain of many physical symptoms at once.
I seem to recall that BMI and/or WHR tend to be excellent predictors of physical health. Together with a couple more measures, these predictions can be further improved. The advantage of having a single number would be for research purposes on population health, and it is easier to have a single number for personal assessment.
Not quite reduce. We managed to develop certain approximations which, albeit crude, work sufficiently well for some purposes. Of course, not all purposes.
I seem to recall they tend not to be. In particular, BMI is a flawed indicator, as it has a pronounced bias for short and tall people.
Which “these predictions”—what are you forecasting?
And muscular people. What’s wrong with WHR?
High blood pressure is a good predictor of someone being unhealthy. On the other hand, drugs that reduce blood pressure don’t provide the returns that people hoped for.
Goodhart’s law applies very much.
Before dying of a heart attack, Seth Roberts had a year where he improved on the score that’s the best predictor for heart attacks, while most people don’t improve on that score as they age.
Using metrics like BMI and WHR seems to me very primitive. We should have no problem running a 3D scan of the whole body. I would estimate that obesity[3D scan + complex algorithm] is a much better metric than obesity[BMI], obesity[WHR] or obesity[BMI/WHR].
That could be further improved by not only going for the visible light spectrum but adding infrared to get information about temperature. And you could follow it up by giving the person a vest with hundreds of electrodes and measuring conductance.
The tricorder X Prize is also interesting.
As quantified-self devices get cheaper, it will also be possible to use their data to develop new metrics. A nursing home could decide to give every resident a device that tracks heart rate 24/7. After a few years they can give the data to some university bioinformatics folks who can try to develop good prediction algorithms.
Math skills can mean multiple things to different people. Some people take it to mean the ability to calculate 34*61 in a short amount of time and without mistakes. Other people take it to mean doing mathematical proofs.
We might even find something more sophisticated than fat percentage. Not all fat people are ill/heading towards illness. Not all thin people are healthy.
Accumulation of fat to vital organs like the liver could be a better predictor. Fatty liver can be diagnosed via ultrasound, which is cheap.
Being fat is a risk even if you get sick for other reasons. Rehabilitation suffers.
Cite?
Fatty liver predicts the risk for cardiovascular events in middle-aged population: a population-based cohort study
Obesity and Inpatient Rehabilitation Outcomes Following Knee Arthroplasty: A Multicenter Study
Yes, we have to try many different metrics and see which ones work best and for what purposes.
It seems like there are a lot of fan-fiction fans here. Fan-fiction fans, I am curious as to what draws you to the fan-fiction of which you are fans. Is part of it that you’re fans of other fan-fiction fans? I guess that depending on the cosplay you could even be a fan of fan-fiction fans’ fans.
This is a different question, but I’ve occasionally wondered why some franchises (like Star Trek, Buffy, or Harry Potter) generate a lot of fanfic and others generate much less. Part of this is raw popularity, of course, but quite a bit isn’t; the film Avatar (the one with blue aliens, not the one with kung fu) was far more popular than, say, Pirates of the Caribbean, but the latter spawned a thriving fic community and the former has a smattering of stories mostly intended to illustrate critical points.
I don’t think there’s any single answer, but a franchise’s chances seem to be improved if: it’s suitable for episodic storytelling (Pirates is a self-contained story, but it’s framed like an entry in a serial); it’s got strong and ideally archetypal characters (Kirk, McCoy, Spock: action/emotion/reason, easy to write but easy to give depth to); and it’s got an open setting with a lot of depth and unexplored bits (few settings outside spec-fic generate a lot of fanfic, and most of those that do are period pieces or procedurals). We’re looking at works as toolkits for storytelling, in other words; tight plotting might actually be detrimental.
Star Trek is a special case because back when it was created, there weren’t a lot of geekish series one could write fanfic about unless you resorted to books. There was no anime fandom, comic books were aimed at much younger people, and non-anthology TV genre fiction with enough merit to gain a fanbase was rare.
I cannot parse your question. Can you rephrase?
I should probably update this prediction. Considering Yudkowsky’s recent pwnedness and pieces like this becoming common, it is at least 10%.
Moldbug in 2008.
Is this post just an excuse for an NRxic quote? And what does the “Yudkowsky’s recent pwnedness” swipe refer to?
Probably the idea of relocating to a place outside of the US where it’s easier to get visas.
No. I honestly think the probability of this prediction coming true has increased.
If that’s the case, why don’t you simply update your PredictionBook entry?
Trivial inconvenience: I’d forgotten my password. Also, I wanted to talk about it with other people.
Edit: Predictionbook entry updated.
Software like KeePass really helps for that purpose.
For what it’s worth, when I asked Nyan Sandwich and Nydwracu whether they thought Eliezer Yudkowsky was pwned, they cited his support for open borders and his evangelical polyamory as evidence that he was, indeed, pwned.
I’ll bet a thousand 2014 dollars at even odds that it’s less than a quarter of that.
(Clarification: I’m talking about the population of the US, as per the PredictionBook entry, not of North America as per Moldbug’s quote; the population of the latter is already over 500 million. Should probably stipulate present borders too, just in case. 2 billion isn’t credible in any case, though.)
I’m not under the impression he was much less pwned in 2007. Are you thinking of something in particular?
Two billion is a crazy population prediction even if open borders were enacted. Relative quality of life would decline very quickly under open borders, and immigration would slow down dramatically.
I also read an estimate that “only” 500 million people worldwide want to immigrate to the US. Overall I expect the quality-of-life gap between the US and the third world to continue narrowing over the next couple of decades.
I’m dubious about borders getting opened that soon, considering how long it’s taking to make moderate moves towards drug legalization.
Curiously, the anti-immigration movement in the US would be very different from those in Western Europe, and, I would guess, significantly weaker. The economic arguments are somewhat similar in both regions (although studies differ on whether, and at what level of immigration, it actually increases unemployment, since in the short run immigrants seem to go to countries where unemployment is already decreasing). But in Western Europe immigrants are vastly overrepresented in crime statistics compared to the local population, which is not the case in the US. Nor, it was my impression (I am not from the US), are immigrants there often thought of as the demographic group most prone to commit crimes; and since they are not at the top of the crime statistics, they are less likely to be a target of blame. (It must be noted that immigrants aren’t a homogeneous group, in either Western Europe or the US, and their effect on the destination country, and the perception of that effect, can differ.) Furthermore, it was my impression that, due to its long history of immigration, the national identity of the US is not based on ethnicity, while in most European countries it definitely is. So it is not surprising that Americans are somewhat more likely than Europeans to think that immigration should be increased, and about two thirds of them think that on the whole immigration is a good thing; only in Sweden do we find similar support for it.
Nevertheless, even given this unusually positive attitude towards immigration, I would guess that the US population reaching even 600 million (let alone 2 billion, which I believe must have been hyperbole) by 2038 has a probability of less than 1-2 percent; the U.S. Census Bureau projects approximately 400 million. The reason is that while making bold claims, e.g. about opening all borders, is often a good strategy for a pundit, since it gains him attention whether or not the claims about doubling the world’s GDP are correct (in the long run they may be; I do not know), taking bold actions is not such a good thing for a politician or a civil servant, since it risks losing control of the situation and/or getting fired, and, as a general rule, politicians and civil servants want neither of those. Therefore, I would guess, they have to be more cautious once they become actual decision makers.

So even someone who wants to promote open borders and free mobility would probably do better to encourage more Schengen/EU-style regional agreements and then gradually “merge” them through new bilateral agreements between those unions, as that seems less risky than simply welcoming all immigrants. In the far future, large parts of the world may be covered by Schengen-like agreements, making “nobody illegal” in a sense similar to how it is relatively easy for a person from one EU country to work in another. But that would probably take much more than 25-30 years. And even then it seems highly unlikely that the US population would exceed even 1 billion, let alone 2 billion, since as the economies of developing countries improve, there will be fewer incentives for people to leave them for the US. One would expect a huge influx of immigrants only if the US government loosens immigration restrictions faster than developing countries manage to improve.
Good counter-argument; updated.
Though I will point out that civil servants in a position to decide such things are practically unfireable, and that politicians’ public personas are downstream of public opinion. If the media and academia, which are mostly upstream, decide that closed borders really are a moral crime akin to segregation (not hard, since border control fundamentally is segregation; not that I think this in itself makes it immoral), then public opinion would try to resist through a few populist politicians but would eventually succumb, as it has on every other issue where its interests or opinions were pitted against them.
As a counterpoint, look at the rise of anti-immigration movements in Europe, e.g. the success of Marine Le Pen in France.