Somehow, LW/MIRI can't disentangle research and weirdness. Vassar is one of the people whose public interviews end up giving this impression.
Bruno_Coelho
I bet that if companies cut the number of meetings in half, the productivity gain would be enough to make a 40h/week feasible for a lot of workers.
The economic implications of reading LW should somehow be included in the census. Human resources are something the rationality cluster has in abundance. Imagine people being paid for the insights they post here.
I suspect people actually have well-defined goals but are not specific enough about actions.
This anti-academic feeling is something I associate with LessWrong, mostly because people can find programming jobs without necessarily having a degree.
Apparently you don't need an argument to be a nationalist. I guess this is just System 1 at work.
Seems like a good test for reactivating LW's dynamics.
Learn math too, to understand data structures, graphs, algorithms, and all the basic CS stuff.
Both positive and negative black swans. Additionally: randomness and regression to the mean.
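Regression to the mean is easy to see in a toy simulation (my sketch, not part of the original comment): give everyone a stable "skill" plus per-test luck, select the top performers on one test, and their average on a retest falls back toward the population mean.

```python
import random

random.seed(0)

# Each observed score is stable skill plus transient luck (noise).
skill = [random.gauss(100, 10) for _ in range(10_000)]
test1 = [s + random.gauss(0, 10) for s in skill]
test2 = [s + random.gauss(0, 10) for s in skill]

# Select the top decile on the first test.
cutoff = sorted(test1)[-1000]
top = [i for i, score in enumerate(test1) if score >= cutoff]

mean1 = sum(test1[i] for i in top) / len(top)
mean2 = sum(test2[i] for i in top) / len(top)

# The same group's retest average sits between their first-test
# average and the population mean of 100: regression to the mean.
print(mean1, mean2)
```

The selected group still scores above average on the retest (skill is real), but less extremely so, because part of their first-test advantage was luck that does not repeat.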
In the em scenario, rich people would be the first ems. I don't know how broad this would be, but Robin expects a small group of people with lots of copies.
This is an academic habit, but it is vulnerable to group bias. Normally, you don't send drafts to experts who strongly disagree with your claims, but to close friends who want to read what you write.
I've seen only a math post so far. What kinds of topics do you plan to write about?
Students are often quite capable of applying economic analysis to emotionally neutral products such as apples or video games, but then fail to apply the same reasoning to emotionally charged goods to which similar analyses would seem to apply. I make a special effort to introduce concepts with the neutral examples, but then to challenge students to wonder why emotionally charged goods should be treated differently.
-- R. Hanson
The Poverty of Historicism is another such book. BTW, the overall approach is the same: making theories restrictive enough.
I think people who blog normally expose inconclusive thoughts or drafts, not complete solutions. Or they want to teach more people and build a community. In an academic format, this is not so easy.
Benatar's asymmetry between life and death makes B the best option. But as his argument is hard to accept, A is better, whatever human values the AI implements.
Interested too. It will be particularly useful if it covers topics that are not easy to find in other sources: giving, x-risk, disruptive technologies, cryonics.
For some reason, (old) LessWrongers end up gravitating toward reactionary themes. I wonder why, and whether it is just signaling or something serious.
The boundaries of relevance are something to think about. A lot of places outside LW host discussions. Political topics were a thing back then, but now apparently people mention them in Open Threads, and the most frequent commenters are still posting elsewhere. EA emerged, and with good coordination. However, this does not mean we should rule out possible changes in the dynamics.