Post Request Thread
This thread is another experiment roughly in the vein of the Boring Advice Repository and the Solved Problems Repository.
There are some topics I’d like to see more LW posts on, but I feel underqualified to post about them relative to my estimate of the most qualified LWer on the topic. I would guess that I am not the only one. I would further guess that there are some LWers who are really knowledgeable about various topics and might like to write about one of them but are unsure which one to choose.
If my guesses are right, these people should be made aware of each other. In this thread, please comment with a request for a LW post (Discussion or Main) on a particular topic. Please upvote such a comment if you would also like to see such a post, and comment on such a comment if you plan on writing such a post. If you leave a writing-plan comment, please edit it once you actually write the post and link to the post so as to avoid duplication of effort in the future.
Let’s see what happens!
Edit: it just occurred to me that it might also be reasonable to comment indicating what topics you’d be interested in writing about and then asking people to tell you which ones they’d like you to write about the most. So try that too!
I’ve done some public speaking coaching and could outline the four highest-value tips I know, if this would be helpful
Neato. Will do by the end of April.
Done by April 15! Nice.
I’ve coached competitive debating which is a bit more specialist but probably overlaps in the important parts.
It’s probably best if I wait until you’ve written your article so we aren’t duplicating effort, but if you want assistance/collaboration, PM me.
I’d like to see posts about (for starters, a primer on) computational complexity. Scott Aaronson (see e.g. Why Philosophers Should Care About Computational Complexity) has suggested that computational complexity should be thought of as “quantitative epistemology”: that is, it quantifies what it is possible for agents with finite computational power to know in reasonable amounts of time. This seems very relevant to the more abstract parts of the Sequences: for example, Bayesian inference is nice and all, but in practice it’s often computationally intractable. If you want answers to your questions in reasonable amounts of time, what do you (or a superintelligent AI) actually do instead of Bayes? But the Sequences themselves don’t really discuss these concerns.
I think that’s partly because the usual computational complexity results aren’t terribly relevant here. Many NP-complete problems, while very difficult to solve exactly, are quite computationally tractable for any constant allowable error. (For example, finding a route in the Euclidean traveling salesman problem that is within 1% of the best route, or within any other error bound, provided you hold the error bound fixed with respect to the problem size.)
So, the fact that a proper Bayesian update on a complex network is intractable is not necessarily relevant, if you can guarantee that your approximate update has bounded error.
Also, my general sense is that bounded-error complexity proofs are much less common, and impossibility proofs for bounded error results even less so. (But it’s not something I’ve studied at all, so my confidence here is low.)
I don’t necessarily want a thorough discussion of particular results so much as an injection of the idea of computational complexity into the LW memeplex. It certainly seems more valuable than, say, quantum mechanics.
It’s more likely for this than for QM that an existing introduction to computational complexity is good enough. It would still take some searching to find a good one to recommend though!
Tentatively agreed. However, I’m concerned that a low-quality understanding of the limits of computational complexity would have a negative impact on discussion, because the complexity results often depend on very fragile assumptions that can be violated without significant harm (e.g., replacing exact solutions with bounded-error ones).
I agree that a low-quality understanding of things has a negative impact on discussion but don’t see a reason to apply that skepticism more towards a concept that hasn’t been introduced to the LW memeplex than towards concepts that are already in it.
Hm… the set of NP-complete problems for which there is a polynomial-time approximation scheme is not that large, and the approximation schemes can sometimes be pretty bad (i.e., getting answers within 1% takes time n^100). Besides, Bayesian inference is at least #P-complete, possibly PSPACE-complete, and a 1% error in the prior can blow up to arbitrarily large error in the posterior, so approximation algorithms won’t do anyway.
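The intractability being discussed here can be made concrete with a toy sketch (mine, not from the thread; all names are illustrative): exact Bayesian inference by brute-force enumeration over n binary variables must touch all 2^n assignments, which is the naive baseline that makes “just do a Bayes update” infeasible for large models.

```python
import itertools

def exact_posterior(n, joint_prob, evidence):
    """P(x_0 = 1 | evidence), computed by summing over all 2^n assignments."""
    numerator = denominator = 0.0
    for bits in itertools.product([0, 1], repeat=n):
        # Skip assignments inconsistent with the observed evidence.
        if any(bits[i] != v for i, v in evidence.items()):
            continue
        p = joint_prob(bits)
        denominator += p
        if bits[0] == 1:
            numerator += p
    return numerator / denominator

def p_joint(bits):
    """Toy joint: independent coins, each coming up 1 with probability 0.6."""
    k = sum(bits)
    return 0.6 ** k * 0.4 ** (len(bits) - k)

# Already 2^10 = 1024 terms at n = 10; the loop count doubles with each variable.
print(exact_posterior(10, p_joint, {3: 1}))  # ≈ 0.6, since x_3 is independent of x_0
```

The exponential loop is the whole point: real inference algorithms (variable elimination, sampling, variational methods) exist precisely to avoid this enumeration, and the complexity results referenced above concern when that avoidance can or cannot be done with bounded error.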
I’m curious to learn more.
My general impression (having studied computer science, and complexity theory in passing, but being far from an expert) is that bounded-error approximations are not as well studied as precise solutions. I also get the impression that most of the bounded-error complexity results are results for specific approximation algorithms, not proofs about the limits of such algorithms. Are impossibility results for bounded-error approximations common?
In other words, do you mean the set where such approximations are known, or the set where they are not known to be impossible?
For the record, this comment has motivated me to find some random introductory paper on computational complexity and slowly read it, rewriting it in my own simplified words as I go along.
Don’t expect an actual LW post out of this, but (anyone can) feel free to bug/PM me in a few days to inquire on how it went. My hopes are that the mental muddiness around the subject will finally clear away if I try to explain it myself as I read it, even if I’m not explaining it to anyone in particular.
If you haven’t checked it out already, Sipser is a great book. It’s probably the most well-written math textbook I’ve ever read (and arguably it’s not even a math textbook).
Related, it would be interesting to have more about information theory in general.
I’m interested in reading longevity advice.
concerning exercise
concerning supplements
concerning the Paleo diet
Concerning alcohol.
Exercise seems to be full of very-confidently-given advice not grounded in evidence. I am keen to have more energy for doing stuff and would love to know how to figure out what advice to follow.
The state of exercise science is absolutely deplorable. You’re stuck with what coaches who train athletes say after having trained lots of people.
Similar for music and other arts. Despite the lack of science, the successful teachers tend to produce the best students (or they wouldn’t be successful). Yes, this forces each new teacher to start from scratch, but old, good teachers should be fairly trustworthy after years of internalized, natural experiments.
Here are two previous posts on exercise:
Minimum viable workout routine
Weight training
But I agree that this is an area that deserves more study from LW members.
Minimum viable is an interesting phrase.
Recent data shows you can do as little as 3 minutes of exercise a week and get insulin-response benefits and VO2max benefits as good as if you’d jogged for hours. (Though you don’t get the calorie or muscle-mass benefits.)
My layman’s understanding is that you’re triggering your body’s “oh #@$! bears live here” function, so it makes sure you can run from bears in the future, even if you don’t encounter them all that often.
That link describes an exercise regime that takes 30 minutes a week, and then later one that takes ~10 minutes a week. (Of which only about 1⁄3 is “high intensity”, but the rest is part of it too.)
Those are still impressively low numbers, but let’s not exaggerate.
My original exposure to Gibala was a later study that’s not listed in the above link, where they were trying to see if the effects would still remain even if the gentle pedaling was removed. They did, but I can’t find a link to it.
Edit: Though the longer ones are more effective. IIRC the most effective one is 50 mins and not 30, even though the above article claims 30.
Reddit’s r/fitness is a good source of mostly fact-based information (or at least they have a culture of aggressively attacking “broscience”).
The FAQ is a good place to start. Examine.com is a very good website made by one of the mods, which focuses mainly on the science of nutrition.
While by no means a replacement for such a post, here’s some easily-followed advice for reducing the motivation threshold to initiating exercise.
Exercise makes you warmer—if keen, don’t go out of your way to prevent yourself from becoming cold.
Purchase a removable doorway pull-up bar. Place it outside and next to the door to your most frequently used loo.
Warm yourself up (pull-ups anywhere; push-ups using the bar or any surface whatsoever; high knees; running from parked car to destination; etcetera). Pep yourself up when in a negative mood. If not already present, hacking in some vanity and then exercising while looking at yourself might also help with motivation. Try to have fun with it (i.e., try not to reluctantly force yourself to exercise or guilt yourself for abstaining).
This is an example of what I mean by “very-confidently-given advice not grounded in evidence”. [citation needed].
What evidence do you require? That exercise warms you and peps you up, or that placing a pull-up bar by the loo will decrease exercise initiation motivation threshold?
The latter is a low-cost test one might as well try; do you expect studying factors that influence others’ motivation will benefit you more than giving it a go yourself?
On exercise warming the body.
A good review on endorphins in relation to exercise, depression, and anxiety.
Agreed. You don’t need to see studies verifying that something works for everybody if you can just directly verify that it works for you. A hypothesis affords testing.
I used to use a desktop background of a well-muscled guy; just the image made me want to work out.
I suspect subscriptions to workout magazines featuring the same might also work for those for whom the first works, and are probably more socially acceptable, if that is something you concern yourself with.
Among my friends, a big picture of a well-muscled guy would be much more socially acceptable than a workout mag :)
I once promised someone here I’d write a post called “(pure) mathematics for rationalists,” and I tried, but it was way too big a subject and I couldn’t figure out a good way to get started or exactly what I should be writing about. If someone would like to request a substantially narrower version of this post, I would be grateful. Some possible ideas, phrased in the form of questions:
What is pure math, anyway? I sort of know what physics and biology and chemistry are, but I don’t have a concrete visual picture in my head of what a person does when they tell me they’re a mathematician.
What would I get out of studying pure math, e.g. in terms of possible rationality skills?
How do I get started learning pure math if I haven’t done much math before? (I don’t know if I have anything intelligent to say about this, though.)
To what extent is it justifiable to study pure math relative to something more useful, e.g. programming?
I would like to see more posts like Paul Christiano’s My workflow. There are many folks here who are extremely productive (e.g. lukeprog, gwern) and it would be helpful for many of us to have a detailed first-hand account of how such folks manage their time.
I’d be interested in lukeprog’s (or CFAR’s) thoughts on how to implement “tight feedback loops” into every day instrumental rationality (as opposed to running a business or project).
I’d be interested in writing this one. I don’t think your divide is a real one; it’s basically the same skill. But it’s still worth talking about in that context.
I’ve often found the examples in some rationality skill discussions difficult to relate to, even though the skill in question seems relevant. The context something is discussed in will make it more or less accessible to different people, even when it’s the same skill and beneficial to all concerned.
EXACTLY.
How can I alter my Big 5 personality traits? (In particular, conscientiousness and extroversion)
Given the seemingly high interest I’m considering researching a full post on this. However, going just off what I remember from psych classes, the short answer is that we don’t know, and we suspect that you can’t alter them much, at least not without doing something drastic. That said, putting yourself in social groups where the desired trait is common/expected looks relatively promising. Additionally, it may be possible to change your behavior in ways that emulate the effects of a Big 5 change without technically changing your personality. In particular, the Getting Things Done approach I learned from CfAR is supposed to boost your effective conscientiousness, and I have observed that to kinda be the case since I adopted it.
Related: Should I alter my Big 5 personality traits?
I realize it’s not the ones you were explicitly asking about, but single dose psilocybin can increase openness long term. Relevant Google search.
A few from 11 Less Wrong Articles I Probably Will Never Have Time to Write: Biases in Charity, Hedonomics, How to be a Happy Consumer.
A tutorial on whatever the current cutting-edge version of UDT is, written at the level of my decision theory FAQ.
In part, so I can understand UDT-consequentialism.
I would like to request some detailed accounts of people dealing with mental health issues from a LW perspective. There’s a lot on this website about hacking your brain’s normal procedures, but not a lot about noticing actual bugs and taking steps to debug effectively and efficiently, or accounts of mistakes made in the debugging process. This might be too specific to each person to be useful, but it would still be interesting, and maybe there are parallels to draw even if specific solutions don’t translate from person to person. It seems like a lot of people in the comments have something to contribute from experience, but there are few complete accounts.
EDIT: Here are some specific topics:
A catalog of some common indicators that you’re depressed; maybe also if you have a recurring problem, then what are your warning signs that it’s back?
If you have sufficient evidence that you’re probably depressed, how do you effectively/efficiently trick your faulty hardware to get help? Where should you look for help? How do you fix your feeling after you’ve fixed your thinking?
What is the scope of mental health treatment in terms of money, time/week, time to see results, etc. for various issues? When is it worth undertaking?
What are some mental health symptoms that are known by the medical community but aren’t widely known outside of it that might deserve mainstream attention and maybe de-stigmatizing?
Seconded, and are there any currently existing LW posts on CBT? (I would like one, if not.)
not on LW, but from a highly intelligent “LW affiliate” http://www.spencergreenberg.com/2012/07/break-your-downward-emotional-spiral/
Slight caution, trying to “think your way out of depression” is a common failure mode among high intelligence people who are used to being able to solve problems with their own research. Most mental health problems by their nature require external intervention.
On the other hand...
That’s exactly why I want some tricks that intelligent, depressed people can use on themselves in order to stop trying to think themselves better and to get up off their butts to look for help.
Thirded.
I’ve found knowing indicators useful for managing stress, too. I’m sometimes terrible at knowing when I’m stressed, so I also pay attention to whether I’m getting headaches when other people aren’t (meaning it’s not just the weather), and whether I’m checking for bugs in the house more than usual.
The symptoms known in the medical community but not outside it is something I hadn’t thought about before, but it sounds very interesting.
Stress management might also tie in with akrasia, to some extent—some people procrastinate because thinking about the work is stressful.
I want to know how to manage flashbacks or trigger things. :(
Well, why not?
Finishing my analysis of predicting Google shutdowns
[pollid:433]
Doing more work on my Touhou music data compilation (descriptive statistics & regressing against unemployment rates)
[pollid:434]
Compiling cites on correlations of IQ with various desirable things as an example of a genuine halo effect in the real world
[pollid:435]
A rant on how geeks’ self-centered traits and lack of understanding of things like tacit knowledge lead to deprecation of user-generated content, not caring about archiving practices, and blaming the victims
[pollid:436]
Trying to understand the confusing & contradictory literature on the relationship of IQ & Conscientiousness
[pollid:437]
A chapter-by-chapter commentary on Drescher’s Good and Real, optimistically including any commentary online or in academia, and over-optimistically including some Haskell to work with his Quantish universe (a project I’ve mulled for the past few years but never started, interested in whether there’s major LW interest and I should push it way forward)
[pollid:438]
I’m amazed that I’m smack dab in the middle of the consensus.
Could your rant about geeks add bad user interfaces to the list of problems?
Regarding Good and Real, I didn’t manage to work out the EPR experiment. It would be nice to see someone else make sense of it.
Are these life advice threads (like how to get a job in Australia, or how to be polyamorous) appropriate for Less Wrong? Because if they are, there are a few procedural knowledge gaps I’d like to fill.
If these gaps are small enough that they can be filled using relatively short sentences, have you tried the Open Thread instead? I have in mind requests for longer posts.
This has happened before, and worked out well for me: I wouldn’t have written this post otherwise, and because that post was well-received I wrote my DA sequence.
I would like someone who had read, understood and internalized the QM sequence to write a TL;DR of it: the points essential for reading further sequences: reductionism, indistinguishability of “particles”, Everett’s worlds etc, with links to the subsequent posts which rely on these concepts. This would help those who are not natural at physics to forge ahead without getting stuck, or worse, skipping the sequence without learning anything from it. My hope is that, if such a distillation is successful, these essential points can be expanded on and motivated by other means, providing people with an alternative.
http://www.youtube.com/watch?v=dEaecUuEqfc
I have no idea why you linked this.
You want to talk about QM? At this link, there is an interesting view.
But shminux didn’t say “I want to talk about QM” or anything close to it. He was asking for something much more specific: a short summary of the bits of Eliezer’s QM posts that are actually necessary to read the other things he’s written that refer back to them.
That video fails to achieve this, on several counts.
It is something like an hour long. Watching it might take longer than reading all Eliezer’s QM posts.
The view of QM it puts forward differs somewhat from Eliezer’s. This makes it distinctly un-useful as a summary of Eliezer’s position to help people understand his other writing.
It picks different topics from the ones Eliezer describes; for a random example, one of the things in Eliezer’s QM sequence that he makes use of elsewhere is the idea that the idea of “one particle as distinct from another identical one” badly fails to match up with how the world actually works; the linked video has nothing to say about this because it’s focusing on different things.
Its purpose is quite different from, and narrower than, Eliezer’s. Eliezer is trying to teach some of the basics of QM to an audience that might not already know it. Garret is trying to tell people who’ve read a bunch of popular writing on QM that the popular writing gives a misleading picture, and to show a bit of the mathematics required to paint a better picture. (That’s also one of Eliezer’s purposes. But it’s not the only one.)
I should add that I didn’t find Garret’s explanation particularly clear or insightful; but opinions on that might vary.
There seem to be many math/physics/CS/mathy PhDs and grad students on LW. I am one. I would very much like to hear your experiences and what activities increased/decreased your productivity and success. In fact, this seems like a good idea for another ‘Advice Repository’.
And you made it happen!
Two that apply to everyone:
Every bit of your work that can go in a computer should go in a version control system repository.
Everything you do that can be generated on your computer should be stored in a format such that regenerating it from original human-editable sources is as easy as typing “make”.
One that applies to everyone who has to write some form of software:
Everything that you write should be accompanied by tests, which should be run regularly, preferably by some “buildbot” system as well as yourself. Otherwise, if you’re writing reusable code it’s all too easy to break use case A while improving use case B or adding use case C. Even when you’re writing use case A, it’s easy to trick yourself into thinking the result is more correct than it is unless you’ve tested it.
There is a world of psychological health in between “This isn’t working!?! It was working fine six months ago! I’m screwed!” and “Oops, time to write another test for git bisect to play with. I should probably increase the line thickness in all my thesis’ graphs by a smidge while waiting for that to finish.”
I don’t have time to flesh this out into a full post. Plus, the more material I add the more narrowly focused on applied-mathematics-on-Linux it would get. I hope the abridged version is still helpful.
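As a minimal sketch of the testing advice above (the toy routine and names are my own illustration, not from the comment): pair each piece of numerical code with cheap assertions that pin down known-correct cases, so that later refactoring or a `git bisect` run has something concrete to check against.

```python
def trapezoid(f, a, b, n=1000):
    """Numerically integrate f on [a, b] using the trapezoid rule with n panels."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

def test_trapezoid_linear():
    # The trapezoid rule is exact for linear functions: integral of 2x on [0, 1] is 1.
    assert abs(trapezoid(lambda x: 2 * x, 0.0, 1.0) - 1.0) < 1e-9

def test_trapezoid_quadratic():
    # For curves it is only approximate: integral of x^2 on [0, 1] is 1/3.
    assert abs(trapezoid(lambda x: x * x, 0.0, 1.0) - 1.0 / 3.0) < 1e-4

if __name__ == "__main__":
    test_trapezoid_linear()
    test_trapezoid_quadratic()
    print("all tests pass")
```

Tests like these are exactly what turns “It was working fine six months ago!” into a mechanical search for the breaking commit, since each revision either passes or fails them.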
For those not aware of it: Google Drive counts as a “version control system” for many document formats. It saves edit histories and allows you to jump back and compare different versions, at least for text documents.
So feel free to use that if it’s just for regular writing or essays! (particularly great for essays and papers, because you also gain the ability to painlessly share it and do collaborative reviews and edits, with features to comment on specific parts of the text)
Being in a position to soon either look into joining, or to start a rationalist group house, I and other potential housemates would very much appreciate a post on how to go about doing either (beyond Shannon’s somewhat minimal post), and/or a post where individuals who have already done so could talk about their experiences, their benefits and pitfalls, etc.
This post may turn into a decent resource.
Prediction market sequence requested
I’d like to see a post about eye strain and eye care. Specifically in regards to time spent in front of a screen, room brightness, breaks, matte versus glossy, glasses, related software, the effect on eye-strain of having multiple monitors, etc.
I would like to see such an article iff failure to follow the advice in it has a moderate probability of leading to long-term damage, not just discomfort that goes away when the problematic activities are halted. Especially so if the damage can occur without a warning more obvious than mild discomfort. My general impression is that most of the activities listed may be uncomfortable, but do not cause lasting damage.
I’d be interested even if the damage is not lasting. I’m much less productive when I have a headache, and I get them easily, so avoiding them is often worth the effort for me.
If anyone is interested, I was considering writing about using game theory in real life combined with luminosity. In particular, I was thinking of writing in an in-depth way about an example (what to do after infidelity in monogamous relationships) of one-boxing in real life, because it felt like a better intuition pump for me than any other example I’ve heard before.
Newcomb’s problem in real life has already been covered before in these posts: http://lesswrong.com/lw/4yn/realworld_newcomblike_problems/ http://lesswrong.com/lw/1zw/newcombs_problem_happened_to_me/
A practical guide to inferring our own desires or generally discovering/deciding what you want. Rationality is supposedly systematised winning, but it is difficult to win when you don’t know what your objectives are.
If the idea of unemployment primarily caused by technological change is valid, then it poses a serious risk to social and economic stability. I would like to see a post about possible solutions to technological unemployment. This topic doesn’t seem to get much mainstream attention, although 60 Minutes did recently do a segment on it. There is prior discussion on LW, but none that offers potential solutions.
Also, I’d like to know how this community is split on this topic.
Unemployment primarily caused by technological change is a major problem: [pollid:439]
EDIT for clarification
EDIT gwern’s essay on Neo-Luddism here for those interested.
Unsure what “technological employment is true” means.
I attempted to clarify what I mean by editing the post. I meant unemployment, and I apologize for being unclear. Also, I didn’t want to say, “Do you agree with Neo-Luddism?” because I think that you could still believe that structural unemployment caused by technology is a real risk without holding a philosophy of opposing technology. Perhaps there are solutions that would minimize suffering, if technological unemployment is a significant risk, that do not involve halting progress in software and robotics.
Currently disagree (although it’s most likely a minor source of unemployment right now) but expect it to become a major problem in the future.
Disagree, but I think the position is subtle:
I agree that technology renders jobs obsolete, so in a sense I agree.
However, I think that thus far the “demand is unlimited” axiom held in some branches of economics has held true in the long term (short-term disruptions aside), so I disagree with the implication that technology will result in unemployment in the foreseeable future. (I am open to the possibility of paradigm shifts which render historical evidence irrelevant; I simply do not see it as reasonable to assume the future will look the way we currently expect it to look. “Nothing ages faster than yesterday’s tomorrow”; quote source not immediately determinable to me.)
Demand may be unlimited but humans will probably not be the most efficient suppliers of any goods in the future. Once it’s cheaper to run a robotic AGI than to support a human the economy will change dramatically. Personally I think the long term solution is just to ensure everyone owns enough capital to sustain themselves directly (permanent retirement, essentially), but transitioning to that kind of situation and then keeping it sustainable for thousands (billions) of years sounds fairly difficult.
Articles on staying offline more.
On a related note, a common piece of advice for sleeping better is to avoid screen time before bed. I often have trouble finding things I want to or should do before bed that don’t involve a computer (anything involving cleaning is too stressful for me to do before bed). I’d like to see something on reasonable ways to accomplish this.
One possible mechanism for why that works is the bright blue light of computer monitors suppressing melatonin secretion. You could try to reduce the damage of screen time by using something like Redshift or f.lux.
Commenting just because it took me a few seconds to work out: “a lid” = “avoid”. (I’m guessing the above was entered on a mobile device.) Also “troue” = “trouble” but that one’s more obvious.
Hello LWers, I would deeply appreciate it if someone could write a post on Turing inductive machines as an alternative to Solomonoff induction as a means of inductive inference. Thank you very much!
Positive externalities.
This seems vague / general. Can you be more specific about what you’re looking for?
I’ve said this before, but:
I would like a LW take on feminism, including topics like what feminists are actually doing, whether you should be one, and why.
I’ve seen attempts to expose LW to feminism before, but it normally seems to consist of taking existing feminist content and reposting it here—I’m thinking of a more “local” version.
I realise this is not quite the point of this thread, but it is relevant:
I would like not to have any more posts on PUA or feminism. They are political, unproductive timesinks. The mutual disarmament we had after the flamewar of Summer 2009, where neither side posted, was excellent.
I support your position and will continue to support it via my votes on comments and posts on such subjects. (Such contributions need to be of a particularly high quality for me to upvote them and the overwhelming majority will be downvoted.)
I appreciate the political, unproductive timesink problem. I’m being optimistic—one day we shall triumph and have a productive post!
I might be a bit interested in some non-specific posts on this kind of political stuff, but not specifics which seems to be very unproductive and lead to lots of loud vs quiet dissent/assent.
I have a few objections against the way PUA and feminism are present at LW, but I think that could be fixed by presenting them in a different way.
My problem with PUA is that its discussion does not happen in a separate article, but rather as huge threads within articles about something else. So I am annoyed by the discussion being off-topic, long, repeating the same points over a dozen different articles, never reaching any conclusion, and threatening to happen again and again forever. Also, even the basic terms are never defined, so people just talk past each other, each one having a completely different understanding of what “PUA” means.
This could be solved by having one article, written by someone who understands the topic but is not mindkilled by it. Someone who could briefly describe the history and evolution of the movement, its most important schools, and what is considered the state of the art today: specific techniques and beliefs, with some evidence that this is what many PUAs really believe today. Then the critics can focus on the core, instead of on what some guy said 20 years ago that almost nobody agrees with today. Having different beliefs properly attributed to different schools, we could perhaps reach a conclusion that some aspect X is unethical, and that X is part of A’s teachings but is not present in, or is even explicitly opposed by, B’s teachings. -- This would be much saner than a discussion where “PUA” is de facto defined by its critics as “whatever any man said on the internet about getting women, especially if it sounds offensive”, and the defenders object that “not all PUAs are like that” without being any more specific.
My problem with feminism is… actually, pretty similar. There is never a definition. (Yeah, we are kind of supposed to find it elsewhere. Guess what: most sources about feminism do not provide a definition nor a pointer to a definition. You are supposed to already know it, otherwise you are a bad person.) And I guess if someone finds a definition, it would not be up to the LW rationality standards. Actually, I consider it pretty unlikely that if you have a huge movement, all of its beliefs, without exception, happen to be true and rational. So it would be nice if someone could present a subset of feminism which is rationally defensible, along with the rational defense.
The current sequence of “LW women”… I don’t exactly understand the point of it. There are some opinions and experiences of some women, LW members. I get that. But I guess there is supposed to be some connotation, some conclusion that a sensitive reader is supposed to draw after reading them—and I am not sure what it is, and I would prefer it to be more explicit. Because it is difficult to agree or disagree with, or just analyze, something that wasn’t actually said. I mean, the last piece is about horrible things that happened to some women. Okay, but what is the lesson we should take from this? -- “Horrible things happen to people.” “Horrible things happen only to women.” “Horrible things happen to women more frequently than to men.” “LessWrong somehow contributes to these horrible things happening.” “Preventing these horrible things should be given higher priority on LW than raising the sanity waterline, constructing our new AI overlords, etc.”—Note that the data is filtered; if something similar happened to a man, it wouldn’t be included as part of the series. In the absence of a clear message, of course the discussion is chaotic.
So here are some recommendations for people who want to discuss these topics:
Make it a separate article, instead of hijacking discussions in a dozen other articles.
Start with a reasonable definition of what you are talking about, just to make sure we use the same words to mean approximately the same things.
Make your point obvious, e.g. by writing a summary at the end of the article.
Don’t announce that you are going to write a series of articles on a topic. Choose one thing, write a standalone article about it. Later, choose another topic, write another article, linking the previous one when necessary.
I.e., we need HughRistik to write it.
If this happens, any discussion should immediately taboo feminism. It’s an extremely loaded term that means different things to different people, and I think it would lead to a lot of arguing over definitions.
I think this might be a useful strategy as part of the discussion. I’d like to cover an idea of what people actually mean, though.