Rate how typical the following topics are for LessWrong (0 means totally atypical, 1.0 means totally on track):
methods for being less wrong, knowing about biases, fallacies and heuristics [pollid:618]
advancing specific virtues: altruism, mindfulness, empathy, truthfulness, openness [pollid:619]
methods of self-improvement (if scientifically backed), e.g. living luminously, winning at life, longevity, advice in repositories such as http://lesswrong.com/lw/gx5/boring_advice_repository/ [pollid:620]
as a specific sub-field of the above: dealing with procrastination and akrasia [pollid:621]
statistics, probability theory, decision theory and related mathematical fields [pollid:622]
(moral) philosophical theories (I tried to make this item sharper but failed) [pollid:623]
rationality applied to social situations in relationships, parenting and small groups [pollid:624]
platform to hang out with like-minded (and often high-IQ) people [pollid:625]
artificial intelligence topics esp. if related to AGI, (U)FAI, AI going FOOM (or not) [pollid:626]
the singularity and transhumanism (includes cryonics as a method to get there) [pollid:627]
organization and discussion of meetups [pollid:628]
presentation and discussion of topics from associated or related organizations (CFAR, MIRI, GiveWell, CEA) [pollid:629]
I chose this poll because I want to use it to validate a presentation I am preparing for a meetup about what constitutes a typical LessWrong topic (with examples). If this works out, it might provide a helpful primer for LW newbies (e.g. at a meetup).
I derived the following list of LessWrong topics and presented it at our LW meetup.
In order of decreasing typicality (most typical for LW first):
methods for being less wrong, knowing about biases, fallacies and heuristics
methods of self-improvement (if scientifically backed), e.g. living luminously, winning at life, longevity
organization and discussion of meetups
dealing with procrastination and akrasia
statistics, probability theory, decision theory and related mathematical fields
topics from associated or related organizations (CFAR, MIRI, GiveWell, CEA)
advancing specific virtues: altruism, mindfulness, empathy, truthfulness, openness
artificial intelligence topics esp. if related to AGI, (U)FAI, AI going FOOM (or not)
the singularity and transhumanism (includes cryonics as a method to get there) - this had the largest variance
rationality applied to social situations in relationships, parenting and small groups - this also had a large variance
(moral) philosophical theories, ethics
platform to hang out with like-minded smart people
I’d be a lot more inclined to respond to this if I didn’t need to calculate probability values (i.e., if I could input weights instead, which would then be normalized).
To that end, here is a simple Python script which normalizes a list of weights (given as command-line arguments) into a list of probabilities:
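#!/usr/bin/python
# Normalize a list of weights (given as command-line arguments) into probabilities.
import sys

weights = [float(v) for v in sys.argv[1:]]   # parse the weights
total_w = sum(weights)                       # normalization constant
probs = [v / total_w for v in weights]       # each weight divided by the total
print('Probabilities : %s' % ", ".join(str(v) for v in probs))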
Produces output like this:
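An illustrative run, assuming the script is saved as normalize.py (weights chosen only for the example):

$ python normalize.py 3 5 2
Probabilities : 0.3, 0.5, 0.2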
Useful script, but I’m not sure it’s necessary for this question. These aren’t exclusive or compared to each other. Each probability is independent: “likelihood that this topic will fit into LessWrong expectations”.
In what way does being typical imply a probability?
It doesn’t and I didn’t claim it did.
I just abused the 0..1 interval for typicality because the poll provides range checking in this case.