If FOOMing doesn’t move us past the near/barely trans-human level too quickly, another policy area to consider could be immigration. Humans have a bad history of responding to outgroups, and the patterns of those responses seem very similar across political and social conditions. This is obviously just one piece of the puzzle, but it might be worth tossing into the mix.
Thank you for the clarification. While I have a certain hesitance to throw around terms like “irredeemable”, I do understand the frustration with a certain, let’s say, overconfident and persistent brand of misunderstanding and how difficult it can be to maintain a public forum in its presence.
My one suggestion is that, if the goal was to avoid confusing RobbBB (whose comments have been wonderfully high quality, by the way), a private message might have been better. If the goal was more generally to minimize confusion for those of us who are newer or less versed in LessWrong lore, more description might have been useful (“a known and persistent troll” or whatever) rather than just providing a name from the enemies list.
I’m confused as to the reason for the warning/outing, especially since the community seems to be doing an excellent job of dealing with his somewhat disjointed arguments. Downvotes, refutation, or banning in extreme cases are all viable forum-preserving responses. Publishing a dissenter’s name seems at best bad manners and at worst rather crass intimidation.
I’ve only done a quick search on him, and although some of the behavior was quite obnoxious, is there anything I’ve missed that justifies this?
Another major factor for grad students or advanced undergrads is the research the professor is doing. This primarily comes into play during office hours (which are generally empty except before exams). Especially with established figures, office hours can be your only chance to get to know them (most are far too busy to take casual callers).
Even the worst lecturers are sometimes extraordinary one-on-one, and even when they aren’t, people doing interesting work tend to have far more contacts than average with other people doing interesting work. Show them that you’re engaged and curious and invisible doors will open to you (many of the most interesting positions are never posted, but are instead filled by recommendation of qualified candidates).
As an irregular consumer of LW, I would find a fine-grained sub system fantastically useful: without having to sort through a lot of posts, I could scan the last couple of weeks or months of submissions on topics of interest.
But rather than determining categories first, it might be useful to do rough counts of the number of articles on a given topic, posting frequency, and so on. You want to make sure you have critical mass before you split things apart. Given this, retaining an “All” category, as other people have suggested, seems very useful, especially for the front page.
One factor that will be difficult to evaluate is how predictions have interacted with later events: warnings can (at times) be heeded and risks avoided. These most difficult cases might be precisely the ones of greatest interest, given your aim of shifting humanity’s odds.
A related question is how much impact these predictions had (aside from their accuracy). Things like Limits to Growth or The Population Bomb were extremely influential in spite of their predictive failures (leaving aside, once again, the hypothesis that they served as self-refuting prophecies).
Once you have a better sense of these cases, it will also be interesting to evaluate how responses developed. Were the authors or predictors influential in the resulting actions? You mention at least one case in the email thread where the author was shut out of later efforts due to the prediction (Drexler). I’d be curious to see how the triggers interacted with the resulting movements or responses (if any).
I would hesitate to use failure during “SIAI’s early years” as evidence of how easy or difficult the task is. First, the organization seems far more capable now than it was then. Second, the landscape has shifted dramatically even in the last few years: limited AI continues to expand, and with it discussion of its potential impacts (most of it ill-informed, but still).
While I share your skepticism about pamphlets as such, I do tend to think that MIRI has a better chance of shifting the odds away from UFAI through persuasion and education than by trying to build an FAI or doing mathematical research.