Thanks. I’ve edited the comment to reflect this better.
Since I wanted a lot of things to be weighted when determining the search order, I considered just hiding all the complexity ‘under the hood’.
The way I view it, search rankings are a tool like any other. In my own experience in academic research I’ve always found that clearly defined search rankings are more useful to me than generic rankings; if you know how the tool works, it’s easier to use correctly. That said, there’s probably still a place for a complex algorithm alongside other search tools, it just shouldn’t be the only search tool.
But if people don’t know what they are voting on they might be less inclined to vote at all.
Well I think it’s more a matter of efficiently extracting information from users. Consider the LessWrong karma system: while it serves its purpose of filtering out spam, it’s a very noisy indicator of anything other than ‘people thought this comment should get karma’. This is because different users think we should vote things up or down based on different criteria, such as: do I agree with this comment? Did this comment contain valuable information for me? Was this an amusing comment? Was this comment well reasoned? And so on.
By clearly defining the voting criteria, you’re not just making users more likely to vote, you’re also extracting information from them more efficiently. From a user’s perspective this can be really useful: knowing that a particular rating measures the popularity or the importance of a project, they can choose whether to pay attention to that metric or to ignore it.
In case it helps, here’s a rough list of the thoughts that have come to mind:
Simplicity is usually best with voting systems. It may be worth looking at a Reddit-style up/down system for popularity; for importance you probably want high/mid/low. If you track the ‘importance profile’ of a user, you could use that to bring projects to their attention that other users with similar profiles find important (there’s a rough sketch of this after the list). Also, in all these rankings it should be clear to the user exactly what metric is being used.
Make use of the wisdom of crowds by getting users to evaluate projects/tasks/comments for things like difficulty, relevance, utility, marginal utility—along the lines of this xkcd comic.
It seems to me that a good open source management tool should direct attention to the right places. Having inbuilt problem flags that users can activate to have the problem brought to the attention of someone who can solve it seems like a good idea.
Skill matching. Have detailed skill profiles for users and have required skills flagged up for tasks.
Could try breaking projects up into a set of tasks, sub-tasks and next actions, à la Getting Things Done.
Duolingo provides free language courses. They plan to make this financially viable by crowdsourcing translations from their students. Perhaps something similar could be implemented here, maybe by getting university students involved.
Gamification across a broad range of possible tasks. Give points for things like participation, research, providing information. While rewarding programmers for coding is good, we should seek to reward anything that lowers the activation energy of a task for someone else.
Keep a portfolio of work that each user has completed, in a format that is easy for them to access, customize, print out, and reference in job applications.
Encourage networking between users with similar skills, areas of interest and the like. This would provide a benefit to being part of the community.
You could have a Patreon-like pledging system where people pledge a small amount to projects they consider important. When a project reaches a milestone, the contributors are then rewarded with a portion of the pledge.
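To make the ‘importance profile’ idea from the first point more concrete, here’s a minimal sketch in Python. The data structures, user names and project names are all invented for illustration; this isn’t based on any existing system, just one plausible way of weighting recommendations by profile similarity.

```python
from math import sqrt

def cosine_similarity(profile_a, profile_b):
    """Similarity between two users' importance profiles.

    A profile maps project -> importance rating (e.g. low=1, mid=2,
    high=3). Only projects rated by both users contribute."""
    shared = set(profile_a) & set(profile_b)
    if not shared:
        return 0.0
    dot = sum(profile_a[p] * profile_b[p] for p in shared)
    norm_a = sqrt(sum(profile_a[p] ** 2 for p in shared))
    norm_b = sqrt(sum(profile_b[p] ** 2 for p in shared))
    return dot / (norm_a * norm_b)

def recommend(user, all_profiles, top_n=3):
    """Suggest projects the user hasn't rated, weighted by how
    important they are to users with similar profiles."""
    me = all_profiles[user]
    scores = {}
    for other, profile in all_profiles.items():
        if other == user:
            continue
        sim = cosine_similarity(me, profile)
        for project, rating in profile.items():
            if project not in me:
                scores[project] = scores.get(project, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical users rating hypothetical projects:
profiles = {
    "alice": {"docs": 3, "search": 2, "i18n": 1},
    "bob":   {"docs": 3, "search": 3, "packaging": 2},
    "carol": {"i18n": 3, "packaging": 1},
}
print(recommend("alice", profiles))  # ['packaging']
```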
I’m struck with Dumbledore’s ruthlessness
Actually I think he was just following his own advice:
While survives any remnant of our kind, that piece is yet in play, though the stars should die in heaven. [...] Know the value of all your other pieces, and play to win.
All things considered I think it was the most compassionate choice he could have made.
No problem, some other things that come to mind are:
It’s best to start the articles with a ‘hook’ paragraph rather than an image, particularly when the image only makes sense to the reader if they know what the article is about.
Caption your images always and forever.
This has been said before, but the title should make sense to an uninitiated reader. Furthermore, to make it more shareable, the title should set up an expectation of what the article is going to tell them. An alternative in this case could be "What do people really think of you?", or, if you restructure the article, something like "X truths about what people think of you."
For popular outreach the inferential distance has to be as low as you can make it; if you can explain something instead of linking to it, do that.
Take a look at the most-shared websites (Upworthy, BuzzFeed, and the like); you can learn a lot from their methodology.
(As a rule, using non-standard formatting when posting to LessWrong is a bad idea.)
There are some improvements you can make to increase cognitive ease, such as lowering your long-word count, avoiding jargon, and using fewer sentences per paragraph. I’d recommend running parts of your post (one paragraph at a time is best) through a clarity calculator to get a better idea of where you can improve.
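If it helps to see what such a calculator is actually measuring, here’s a rough Python sketch of two of the surface-level metrics mentioned above (long-word ratio and words per sentence). The seven-letter cut-off and the choice of metrics are my own illustrative guesses, not those of any particular tool:

```python
import re

LONG_WORD_LETTERS = 7  # illustrative cut-off for a "long" word

def clarity_stats(paragraph):
    """Crude readability statistics for a single paragraph."""
    words = re.findall(r"[A-Za-z']+", paragraph)
    sentences = [s for s in re.split(r"[.!?]+", paragraph) if s.strip()]
    long_words = [w for w in words if len(w) >= LONG_WORD_LETTERS]
    return {
        "words": len(words),
        "sentences": len(sentences),
        "long_word_ratio": len(long_words) / max(len(words), 1),
        "words_per_sentence": len(words) / max(len(sentences), 1),
    }

print(clarity_stats(
    "Utilising multisyllabic vocabulary unnecessarily obfuscates meaning. "
    "Short words help."
))
```

The lower the long-word ratio and words-per-sentence figures, the easier a passage tends to be to read; real readability formulas (Flesch-Kincaid and the like) combine much the same ingredients into a single grade-level score.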
You may also want to look into the concept of inferential distance.
Nice article, have a karma!
There’s a lot of information there; I’d suggest perhaps using this article as the basis for a four-part series, one on each area. The content is non-obvious, so having the extra space to really break the inferential distance down into small steps, so that the conclusions are intuitive to non-rationalists, would be useful.
(As an aside I suspect that writing for the CFAR blog is right now reasonably high impact for the time investment. Personally I found CFAR’s apparent radio-silence since September unnerving and it’s possible that it was part of the reason the matching fundraiser struggled. Despite Anna’s excellent post on CFAR’s progress the lack of activity may have caused people to feel as though CFAR was stagnating and thus be less inclined to part with their money on a System 1 level.)
Donated $180.
I was planning on donating this money, my yearly ‘charity donation’ budget (it’s meager; I’m an undergraduate), to a typical EA charity such as the Against Malaria Foundation: a cash transaction for the utilons, warm fuzzies and general EA cred. However the above has forced me to reconsider this course of action in light of the following:
The possibility that CFAR may not receive sufficient future funding. CFAR’s expenditure last year was $510k (ignoring non-staff workshop costs that are offset by workshop revenue) and their current balance is something around $130k. Without knowing the details, a similarly sized operation this year might therefore require something like $380k in donations (a ballpark guesstimate, don’t quote me on that). The winter matching fundraiser has the potential to fund $240k of that, so a significant undershoot would put the organization in a precarious position.
A world that has access to a well-written rationality curriculum over the next decade has a significant advantage over one that doesn’t. I already accept that 80,000 Hours is a high-impact organization, and they also work by acting as an impact multiplier for individuals. Given that rationality is an exceptionally good impact multiplier, I must accept that CFAR existing is much better than it not existing.
While donations to a sufficiently funded CFAR are most likely much lower utility than donations to AMF, donations to ensure CFAR’s continued existence are exceptionally high utility. For comparison (as great as AMF is), diverting all donations from Wikipedia to AMF would be a terrible idea, as would overfunding Wikipedia itself. The world gets a large amount of utility out of the existence of at least one Wikipedia, but not a great deal of marginal utility from an overfunded one. By my judgement the same applies to CFAR.
CFAR isn’t a typical EA cause. This means that if I don’t donate to keep AMF going, another EA will; however, if I don’t donate to keep CFAR going, there’s a reasonable chance that no one else will. In other words my donations to CFAR aren’t replaceable.
To put my utilons where my mouth is: it looks like the funding gap for CFAR is something like ~$400k a year, and GiveWell reckons that you can save a life for $5k by donating to the right charity. So CFAR costs roughly 80 dead people a year to run, which raises the question: do I think CFAR will save more than 80 lives in the next year? The answer to that might be no, even though CFAR seems to be instigating high-impact good. But if I ask myself whether CFAR’s work over the next decade will save more than 800 lives, the answer becomes a definite yes.
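Spelling out that back-of-envelope arithmetic (all figures are the ballpark estimates from above, so treat the output as order-of-magnitude at best):

```python
# Rough figures from the reasoning above; everything here is a guesstimate.
expenditure_last_year = 510_000   # CFAR spending, excluding offset workshop costs
current_balance = 130_000         # approximate reserves
funding_gap = expenditure_last_year - current_balance   # ~$380k, call it ~$400k/year

cost_per_life_saved = 5_000       # GiveWell-style figure for a top charity
lives_per_year = 400_000 / cost_per_life_saved          # 80.0
lives_per_decade = lives_per_year * 10                  # 800.0

print(f"Estimated funding gap: ~${funding_gap:,}/year")
print(f"Opportunity cost: ~{lives_per_year:.0f} lives/year, "
      f"~{lives_per_decade:.0f} lives/decade")
```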
As far as I understand it, CFAR’s current focus is research and developing their rationality curriculum. The workshops exist to facilitate their research; they’re a good way to test which bits of rationality work and to determine the best way to teach them.
In this model, broad improvements in very fundamental, schoolchild-level rationality education and the alleviation of poverty and time poverty are much stronger prospects for improving the world
In response to the question “Are you trying to make rationality part of primary and secondary school curricula?” the CFAR FAQ notes that:
We’d love to include decisionmaking training in early school curricula. It would be more high-impact than most other core pieces of the curriculum, both in terms of helping students’ own futures, and making them responsible citizens of the USA and the world.
So I’m fairly sure they agree with you on the importance of making broad improvements to education. It’s also worth noting that effective altruists are among their list of clients, so you could count that as an effort toward alleviating poverty if you’re feeling charitable.
However they go on to say:
At the moment, we don’t have the resources or political capital to change public school curricula, so it’s not a part of our near-term plans.
Additionally, for them to change public-school curricula they first have to develop a rationality curriculum, which is precisely what they’re doing at the moment: building a ‘minimum strategic product’. Giving "semi-advanced cognitive self-improvement workshops to the Silicon Valley elite" is just a convenient way to test this stuff.
You might argue for giving the rationality workshops to "people who have not even heard of the basics", but there are a few problems with that. Firstly, the number of people CFAR can teach in the short term is a tiny percentage of the population, nowhere near enough to have a significant impact on society (unless those people are high-impact people, but then they’ve probably already heard of the basics). Then there’s the fact that rationality just isn’t viewed as useful in the eyes of the general public, so most people won’t care about learning to become more rational. Also, teaching the basics of rationality in a way that sticks is quite difficult.
Mind, if what you’re really trying to do is propagandize the kind of worldview that leads to taking MIRI seriously, you rather ought to come out and say that.
I don’t think CFAR is aiming to propagandize any worldview; they’re about developing rationality education, not getting people to adopt any particular set of beliefs (other than perhaps those directly related to understanding how the brain works). I’m curious about why you think they might be (intentionally or unintentionally) doing so.
Are you sure precommitment is a useful strategy here? Generally the use of precommitments is only worthwhile when the other actors behave in a rational manner (in the strictly economic sense), consider your precommitment credible, and are not willing to pay the cost of you following through on your precommitment.
While I’m in no position to comment on how rational your parents are, it’s likely that the cost of you being upset with them is a price they’re willing to pay for what they may conceptualize as “keeping you safe”, “good parenting” or whatever their claimed good intentions were. As a result no amount of precommitment will let you win that situation, and we all know that rationalists should win.
The optimal solution is probably the one where your parents no longer feel that they should listen to your phone calls or use physical coercion in the first place. I couldn’t say exactly how you go about achieving this without knowing more about your parents’ intentions. However you should be able to figure out what their goal was and explain to them how they can achieve it without using force or eavesdropping on you.
I think we’re using different definitions of virtue. Whereas I’m using the definition of virtue as a good or useful quality of a thing, you’re taking it to mean a behavior showing high moral standards. I don’t think anyone would argue that the 12 virtues of rationality are moral, but it is still a reasonable use of English to describe them as virtues.
Just to be clear: The argument I am asserting is that ChrisHallquist is not in any way suggesting that we should rename rationality as effective altruism.
I hope this makes my previous comment clearer :)
This is an example of why I suspect “effective altruism” may be better branding for a movement than “rationalism”.
I’m fairly certain ChrisHallquist isn’t suggesting we re-brand rationality as ‘effective altruism’; otherwise I’d agree with you.
As far as I can tell he was talking about the kinds of virtues people associate with those brands (notably ‘being effective’ for EA and ‘truth-seeking’ for rationalism) and suggesting that the branding of EA is better because the virtue associated with it is always virtuous when it comes to actually doing things, whereas truth-seeking leads to (as he says) analysis paralysis.
TLDR: I managed to fix my terrible sleep pattern by creating the right habits.
I’ve been there, up until a month ago actually.
I’ve tried a whole slew of things to fix my sleeping pattern over the past couple of years. F.lux, conservative use of melatonin, and cutting down on caffeine all helped but none of them really fixed the problem.
What I found was that I’d often stay up late in order to get more done, and it would feel like I was getting more done (when in actual fact I was just gaining more hours now in exchange for losing more hours in the future). Alongside this, my pattern was so hectic that any attempt to sleep at a "normal" time was thwarted by a lack of tiredness; I could use melatonin to ‘reset’ this, but it’d rarely stay that way.
The first thing that helped was sitting down and working out, hour by hour, how much time I actually have in a week; this stopped me from thinking I could gain more time by staying up later. The second thing was forming good habits around my sleep. Habits typically follow a trigger-routine-reward pattern and require fairly quick feedback; as a result, building a habit where the routine is sleeping for eight hours is quite hard.
Instead I attached a pattern to either side of the time I wished to sleep: the first with the goal of making it easier for me to fall asleep, and the second with the goal of making it easier for me to get up.
The pre-sleep pattern followed:
Cue: ‘Hey it’s 10:30pm’
Routine: Turning off technology -> Reading -> Meditation
Reward: Mug of hot-chocolate
While the post-sleep pattern followed:
Cue: Alarm goes off.
Routine: Get out of bed.
Reward: Breakfast.
Since doing this I’ve been awake at 8 am every morning with little trouble, and the existence of those habits has made it easy to add other habits into my routine. Breakfast, for example, is now a cue to go out running on days when I don’t have lectures (this is very surprising for me; I’ve received several comments along the lines of "Who are you and what have you done with the real you?" since I began doing this).
I hope you find this useful.
Fixed it. I don’t think I’ve ever consciously registered that adsorb != absorb, so thanks for that.
Good point, thanks, fixed it.
A brief summary of effective study methods
Thanks for the pointers; I’ll make the changes you’ve proposed and move it to main at some point over the next day.
Look up one or two sequences or other posts for which this could be a follow-up.
I’m having trouble finding an appropriate post, did you have a particular one in mind?
As far as I can tell, killing/not-killing a person isn’t the same as not-making/making a person. I think this becomes more apparent if you consider the universe as timeless.
This is the thought experiment that comes to mind. It’s worth noting that all that follows depends heavily on how one calculates things.
Comparing the universes where we choose to make Jon to the one where we choose not to:
Universe A: Jon made; Jon lives a fulfilling life with global net utility of 2u.
Universe A’: Jon not-made; Jon doesn’t exist in this universe so the amount of utility he has is undefined.
Comparing the universes where we choose to kill an already made Jon to the one where we choose not to:
Universe B: Jon not killed; Jon lives a fulfilling life with global net utility of 2u.
Universe B’: Jon killed; Jon’s life is cut short, his life has a global net utility of u.
The marginal utility for Jon in Universe B vs B’ is easy to calculate: (2u - u) gives a total marginal utility (i.e. gain in utility) of u from choosing not to kill Jon over killing him.
However the marginal utility for Jon in Universe A vs A’ is undefined (in the same sense that 1/0 is undefined). As Jon doesn’t exist in universe A’, it is impossible to assign a value to Utility_Jon_A’; as a result our marginal utility (Utility_Jon_A - Utility_Jon_A’) is equal to (2u - [an undefined value]). As such, the marginal utility lost or gained by choosing between universes A and A’ is undefined.
It follows from this that the marginal utility between any universe and A’ is undefined. In other words our rules for deciding which universe is better for Jon break down in this case.
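If it helps, here’s a tiny Python sketch of that bookkeeping, purely to make the structure explicit; u is an arbitrary unit of Jon-utility and the ‘undefined’ case is modelled as None:

```python
from typing import Optional

U = 1.0  # one arbitrary unit of Jon-utility

def marginal_utility(u_chosen: Optional[float],
                     u_alternative: Optional[float]) -> Optional[float]:
    """Gain in Jon's utility from picking one universe over the other.

    Returns None ('undefined') when Jon doesn't exist in one of the
    universes, because there is no value there to subtract."""
    if u_chosen is None or u_alternative is None:
        return None
    return u_chosen - u_alternative

# Not killing an already-made Jon (B) vs. killing him (B'):
print(marginal_utility(2 * U, 1 * U))   # 1.0 -> not killing gains u

# Making Jon (A) vs. not making him (A'):
print(marginal_utility(2 * U, None))    # None -> the comparison breaks down
```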
I myself (probably) don’t have a preference for creating universes where I exist over ones where I don’t. However I’m sure that I don’t want this current existence of me to terminate.
So personally I choose to maximise the utility of people who already exist over creating more people.
Eliezer explains here why bringing people into existence isn’t all that great even if someone existing over not existing has a defined (and positive) marginal utility.
As far as I know there’s no single sure-fire way of making sure that asking them won’t put them in a position where refusal will gain them negative utility (for example, their utility function could penalize refusing requests as a matter of course). However, general strategies could include:
Not asking in front of others, particularly members of their social group. (Thus refusal won’t impact their reputation.)
Conditioning the request on it being convenient for them (i.e. using phrasing such as “If you’ve got some free time would you mind...”)
Don’t give the impression that their help is make or break for your goals (i.e. don’t say “As you’re the only person I know who can do [such&such], could you do [so&so] for me?”)
If possible do something nice for them in return, it need not be full reciprocation but it’s much harder to resent someone who gave you tea and biscuits, even if you were doing a favor for them at the time.
Of course there’s no substitute for good judgement.
Spot on. This is a big problem in mathematics education; prior to university, a lot of teaching is done without paying heed to the fundamental concepts. For example, here in the UK, calculus is taught well before limits (in fact limits aren’t taught until students get to university).
Teaching is all about crossing the inferential distance between the student’s current knowledge and the idea being taught. It’s my impression that most people who say "you just have to practice" say so because they don’t know how to cross that gap. You see this often with professors who don’t know how to teach their own subjects because they’ve forgotten what it was like not knowing how to calculate the expectation of a perturbed Hamiltonian. I suspect that in some cases the knowledge isn’t truly a part of them, so they don’t know how to generate it without already knowing it.
Projects are a good way to help students retain information (the testing effect) and also train appropriate recall. Experts in a field are usually experts because they can look at a problem and see where they should be applying their knowledge—a skill that can only be effectively trained by ‘real world’ problems. In my experience teaching A-level math students, the best students are usually the ones that can apply concepts they’ve learned in non-obvious situations.
You might find this article I wrote on studying interesting.