What are the options for free MOOC platforms these days? Moodle’s the only one that comes to mind, and it’s not optimized for MOOCs.
How do you plan to measure focus? Just subjective effects, or are you using QuantifiedMind, or pomodoro success rate, or something?
More meetup posts clutter Discussion (which is kinda bad) but mean that people are actually going to meetup groups (which is kinda awesome). Maybe frame a meetup post not as a trivial inconvenience, but as evidence that rationalists are meeting in person, having cool discussions, and working on their lives instead of hanging around on Less Wrong.
When there’s a lot of interesting content here, sometimes people ask why we’re all sticking around talking about talking about rationality instead of doing stuff out in the world.
Fair point, but I did suggest several ways this could be encouraged (pinned threads, different stated lifespans, shared use of the Latest Open Thread feed).
Reducing the visibility of the new threads could help too.
How about overlapping thread lifespans? That way, when a new thread is created, recent comments on the previous thread won't go unread, and discussion can still happen there. For example, a thread posted on Monday lasts a week, and a thread posted on Thursday does too, with both threads pinned to the top and included under the Latest Open Thread feed on the side. I suspect this would be easier to implement than your second option, though harder than your first and third.
If I live forever, whether through cryonics or through a positive intelligence explosion happening before my death, I'd like to have a lot of people to hang around with. Additionally, the people you'd be helping through EA aren't the people who are fucking up the world at the moment. Plus there isn't really anything directly important to me outside of humanity.
Parasite removal refers to removing literal parasites from people in the third world, as an example of one of the effective charitable causes you could donate to.
I can’t speak for you, but I would hugely prefer for humanity to not wipe itself out, and even if it seems relatively likely at times, I still think it’s worth the effort to prevent it.
If you think existential risks are a higher priority than parasite removal, maybe you should focus your efforts on those instead.
Implicit-association tests are handy for identifying things you might not be willing to admit to yourself.
Once EA is a popular enough movement that this begins to become an issue, I expect communication and coordination will be a better answer than treating this like a one-shot problem. Maybe we’ll end up with meta-charities as the equivalent of index funds, that diversify altruism to worthy causes without saturating any given one. Maybe the equivalent of GiveWell.org at the time will include estimated funding gaps for their recommended charities, and track the progress, automatically sorting based on which has the largest funding gap and the greatest benefit.
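For illustration, here's a toy version of that sorting rule in Python. The fields and figures are invented for the sketch; this isn't GiveWell's actual methodology.

```python
from dataclasses import dataclass

@dataclass
class Charity:
    name: str
    funding_gap: float         # estimated dollars still needed
    benefit_per_dollar: float  # estimated good done per marginal dollar

def rank_by_gap_and_benefit(charities):
    # Direct money where both the remaining gap and the marginal benefit
    # are large, so no recommended charity saturates while others starve.
    return sorted(charities,
                  key=lambda c: c.funding_gap * c.benefit_per_dollar,
                  reverse=True)

# Placeholder figures, purely illustrative:
ranked = rank_by_gap_and_benefit([
    Charity("A", funding_gap=2_000_000, benefit_per_dollar=0.003),
    Charity("B", funding_gap=5_000_000, benefit_per_dollar=0.002),
])
print(ranked[0].name)  # "B": the larger gap-weighted benefit
```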
I doubt it will ever make sense for individuals to personally choose, rank, and donate their own money to charities as if they were choosing the ratios for everyone TDT-style, not least because of the unnecessary redundancy.
EDIT: Upvoted because it is a valid concern. The AMF reached saturation relatively quickly, and may have exceeded the funding it needed. I just disagree with the efficiency of this particular solution to the problem.
I would assume that it’s considered worse than death by some because with death it’s easier to ignore the opportunity cost. Wireheading makes that cost clearer, which also explains why it’s considered negative compared to potential alternatives.
I used to read a lot in class, and the teachers didn’t care because they were focused on teaching students that needed more help. I had a calculator I played with, and found things like 1111^2 = 1234321, and tried to understand these patterns. I discovered the Collatz Conjecture this way, began to learn about exponential functions, etc.
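Both patterns are easy to poke at in a few lines of Python:

```python
# Squares of repunits: 11^2 = 121, 111^2 = 12321, 1111^2 = 1234321, ...
for n in range(2, 6):
    repunit = int("1" * n)
    print(f"{repunit}^2 = {repunit ** 2}")

# The Collatz rule: halve even numbers, send odd n to 3n + 1.
# The conjecture is that every starting value eventually reaches 1.
def collatz_steps(n):
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 111 steps, despite the tiny starting value
```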
I also learned to draw probability trees from an explanation of the Monty Hall problem I read once, and I think learning that at a young age helped Bayesianism feel intuitive later on, and it was a fun thing to learn.
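For anyone who hasn't drawn the tree themselves, a quick simulation confirms what it shows: switching wins about two thirds of the time.

```python
import random

def play(switch):
    doors = [0, 0, 1]  # 1 marks the car
    random.shuffle(doors)
    pick = random.randrange(3)
    # The host opens a door that is neither your pick nor the car.
    opened = next(d for d in range(3) if d != pick and doors[d] == 0)
    if switch:
        pick = next(d for d in range(3) if d not in (pick, opened))
    return doors[pick]

trials = 100_000
print(sum(play(True) for _ in range(trials)) / trials)   # ~0.667
print(sum(play(False) for _ in range(trials)) / trials)  # ~0.333
```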
Second the Anki recommendation, but I’m not sure it’s the most fun thing.
Writing fiction was something I enjoyed too, and improved my communication skills.
It’s highly relevant to your second point.
Newcomb-like problems are the ones where TDT outperforms CDT. If you consider these problems to be impossible, and won’t change your mind, then you can’t believe that TDT satisfies your requirements.
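To make the payoffs concrete, here's a back-of-the-envelope expected-value comparison using the standard Newcomb numbers and a predictor of accuracy p:

```python
# Opaque box holds $1,000,000 iff one-boxing was predicted;
# the transparent box always holds $1,000.
def one_box(p):
    return p * 1_000_000                     # correct prediction -> full box

def two_box(p):
    return p * 1_000 + (1 - p) * 1_001_000  # correct prediction -> $1,000 only

for p in (0.5, 0.6, 0.9, 0.99):
    print(p, one_box(p), two_box(p))
# One-boxing pulls ahead once p exceeds ~0.5005, which is why even a
# modestly accurate predictor makes the problem bite for CDT.
```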
Currently working on a Django app for creating directed acyclic graphs, intended to be used as dependency graphs. It should be accessible enough for regular consumers, and I plan to extend it to support to-do lists and curriculum mapping.
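Roughly, the back end boils down to something like this (a sketch; the model names are illustrative rather than my actual schema):

```python
from django.db import models

class Node(models.Model):
    label = models.CharField(max_length=100)

class Edge(models.Model):
    parent = models.ForeignKey(Node, related_name="outgoing",
                               on_delete=models.CASCADE)
    child = models.ForeignKey(Node, related_name="incoming",
                              on_delete=models.CASCADE)

def creates_cycle(parent, child):
    # Adding parent -> child closes a cycle iff parent is already
    # reachable from child; reject the edge if so, keeping the graph a DAG.
    seen, stack = set(), [child]
    while stack:
        node = stack.pop()
        if node == parent:
            return True
        if node.pk not in seen:
            seen.add(node.pk)
            stack.extend(edge.child for edge in node.outgoing.all())
    return False
```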
I need to work on my JavaScript skills. The back-end structure is easy enough, but organising how the graphs are displayed is proving more challenging, as is building a responsive interface for editing them.
TDT performs exactly as well as CDT on the class of problems CDT can deal with, because for those problems it essentially is CDT. So in practice you just use normal CDT algorithms except when counterfactual copies of yourself are involved, which is exactly what TDT does.
Yes, it’s a Newcomb-like problem. Anything where one agent predicts another is. People predict other people, with varying degrees of success, in the real world. Ignoring that when looking at decision theories seems silly to me.
Didn’t the paper show TDT performing better than CDT in Parfit’s Hitchhiker?
This is essentially what the TDT paper argues. It’s been a while since I’ve read it, but at the time I remember being sufficiently convinced that it was strictly superior to both CDT and EDT in the class of problems that those theories work with, including problems that reflect real life.
Can blackmail-type information be usefully compared to things like NashX or Mutually Assured Destruction?
Most of my friends have information on me that I wouldn't want to get out, and vice versa. This means we can do favours for each other that pay off asynchronously, or trust each other with things that seem less valuable than that information. Building a friendship seems to be based on gradually gathering this information on each other, without either of us ever holding significantly more of it than the other.
I don’t think this is particularly original, but it seems a pretty elegant idea and might have some clues for blackmail resolution.
Erm, the monetary system is generally a pretty efficient way to get anything done. Things like division of labour and comparative advantage are pretty handy when it comes to charity too.