Thanks! Validation really, really helps with making more. I hope to, though I’m not sure I can churn them out that quickly since I have to wait for an idea to come along.
That’s a good approach for things where there’s a ‘real answer’ out there somewhere. I think it’s often the case that there’s no good answer. There might be a group of people saying they found a solution, and since there are no other solutions they think you should fully buy into theirs and accept whatever nonsense comes packaged with it (for instance, consider how you’d approach the 1+2+3+4+5+…=-1/12 proof if you were doing math before calculus existed). I think it’s very important to reject seemingly good answers on their own merits even if there isn’t a better answer around. Indeed, this is one of the processes that can lead to finding a better answer.
Well, Numberphile says they appear all over physics. That’s not actually true. They appear in like two places in physics, both deep inside QFT, mentioned here.
QFT uses a concept called renormalization to drop infinities all over the place, but it’s quite sketchy and will probably not appear in whatever final form physics takes when humanity figures it all out. It’s advanced stuff and not, imo, worth trying to understand as a layperson (unless you already know quantum mechanics in which case knock yourself out).
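To make the point concrete, here is a small sketch (not from the original comments) showing why the series has no ordinary sum. The partial sums of 1+2+3+… grow without bound; the famous -1/12 is the value of the analytically continued Riemann zeta function at s = -1, not the limit of these partial sums.

```python
# Partial sums of 1 + 2 + 3 + ... grow without bound, so the series
# diverges under the standard definition of an infinite sum.
def partial_sum(n):
    """Sum of 1 + 2 + ... + n, via the closed form n(n+1)/2."""
    return n * (n + 1) // 2

for n in (10, 100, 1000):
    print(n, partial_sum(n))  # sums keep growing: 55, 5050, 500500

# The -1/12 value comes from a different object: the zeta function
# zeta(s) = sum of n**(-s) converges only for s > 1, and zeta(-1) = -1/12
# is the value of its analytic continuation, not of this divergent series.
```

The divergence of the partial sums is exactly why a pre-calculus mathematician would be right to reject the "proof" on its own terms, even without knowing about analytic continuation.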
Infinite Summations: A Rationality Litmus Test
If it helps -- I don’t understand what the second half (from the part about Youtube videos onwards) has to do with fighting or optimizing styles.
I also didn’t glean what an ‘optimizing style’ is, so I think the point is lost on me.
Regardless of your laundry list of reasons not to edit your post, you should read “I’m confused about what you wrote” comments, if you believe them to be legitimate criticisms, as a sign that your personal filter on your own writing is not catching certain problems. You might benefit greatly from taking them as an opportunity to work on your filter so you can see what we see: upgrading your filter on your own work leads to systematic improvement across all of your writing, not just the one post we’re talking about.
If you’re worried about responsiveness, you might get further by just asking for more detail before making changes instead of explaining, approximately, “I don’t feel like making changes because I’m not convinced that it’ll be a good use of my time or that I’ll get more responses to make it successful”. (I won’t fault you for lacking motivation, of course not, that’s the battle we all fight—but I also suspect that you’d profit considerably from finding that motivation, since it might lead to systematic improvement of your writing.)
tbh I haven’t figured out how to use Arbital yet. I think it’s lacking in the UX department. I wish the front page discriminated by categories or something, because I find myself not caring about anything I’m reading.
I think you’ve subtly misinterpreted each of the virtues (not that I think the twelve-virtue list is special; they’re just twelve good aspects of rational thought).
The virtues apply to your mental process for parsing and making predictions about the world. They don’t exactly match the real-world usages of these terms.
Consider these in the context of winning a game. Let’s talk about a real-world game with social elements, to make it harder, rather than something like chess. How about “Suppose you’re a small business owner. How do you beat the competition?”
1) Curiosity: refers to the fact that you should be willing to consider new theories, or theories at all instead of intuition. You’re willing to consider, say, that “customers return more often if you make a point to be more polite”. The arational business owner might lose because they think they treat people perfectly fine, and don’t consider changing their behavior.
2-4) Relinquishment/lightness/evenness refers to letting your beliefs be swayed by the evidence, without personal bias. In your example: seeing a woman appear to be cut in half absolutely does not cause you to think she’s actually cut in half. That theory remains highly unlikely. But it does mean that you have to reject theories that don’t allow the appearance of that, and go looking for a more likely explanation. (If you inspect the whole system in detail and come up with nothing, maybe she was actually cut in half! But extraordinary claims require extraordinary evidence, so you better ask everyone you know, and leave some other very extreme theories (such as ‘it’s all CGI’) as valid, as well.)
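As an illustration (my own, not from the original comment), the “extraordinary claims” point is just Bayes’ rule: when the prior is tiny and a stage trick explains the appearance almost as well as the real thing, the posterior stays tiny. The numbers below are made up for the sketch.

```python
# Hypothetical numbers: prior that a performer truly bisects someone.
prior = 1e-9

# Likelihood of seeing a convincing "cut in half" appearance...
p_appearance_given_real = 0.99   # ...if it were actually real
p_appearance_given_trick = 0.95  # ...if it's a well-executed illusion

# Bayes' rule: P(real | appearance)
posterior = (p_appearance_given_real * prior) / (
    p_appearance_given_real * prior
    + p_appearance_given_trick * (1 - prior)
)
print(posterior)  # still astronomically small
```

The evidence barely moves the needle because the trick predicts the observation almost as strongly as the real event does; only evidence that discriminates between the two hypotheses (inspecting the whole system, ruling out CGI) would shift the posterior meaningfully.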
In my example, the rational business-owner acts more polite to see if it helps retain customers, and correctly (read: mathematically or pseudo-mathematically) interprets the results, being convinced only if they are truly convincing, and unconvinced if they are truly not. The arational business owner doesn’t check, or does and massages the results to fit what they wanted to see, or ignores the results, or disbelieves the results because they don’t match their expectations. And loses.
5) Argument—if you don’t believe that changing your behavior retains customers, and your business partner or employee says they do, do you listen? What if they make a compelling case? The arational owner ignores them, still trusting their own intuition. The rational owner pays attention and is willing to be convinced—or convince them of the opposite, if there’s evidence enough to do so. Argument is on the list because it’s how two fallible but truth-seeking parties find common truth and check reasoning. Not because arguing is just Generally A Good Idea. It’s often not.
6) Empiricism—this is about debating results, not words. It’s not about collecting data. Collecting data might be a good play, or it might not. Depends on the situation. But it’s still in the scope of rationalism to evaluate whether it is or not.
7) Simplicity—this doesn’t mean “pick simple strategies in life”. This means “prefer simple explanations over complex ones”. If you lower your prices and it’s a Monday and you get more sales, you prefer the lower prices explanation over “people buy more on Mondays” because it’s simpler—it doesn’t assume invisible, weird forces; it makes more sense without a more complex model of the world. But you can always pursue the conclusion further if you need to. It could still be wrong.
8) Humility—refers to being internally willing to be fallible. Not to the social trait of humility. Your rational decision making can be humble even if you come across, socially, as the least humble person anyone knows. The humble business owner realizes they’ve made a mistake with a new policy and reverses it because not doing so is a worse play. The arational business owner keeps going when the evidence is against them because they still trust their initial calculation when later evidence disagrees.
9-10) Perfectionism/Precision: if it is true that in social games you don’t need to be perfect, just better than others, then “perfect play” is maximizing P(your score is better than others), not maximizing E(your score). You can always try to play better, but you have to play the right game.
And if committing N resources to something gets a good chance of winning, while committing N+1 gets a better chance but has negative effects on your life in other ways (say, your mental health), then it can be the right play to commit only N. Perfect and precise play is about the larger game of your life, not the current game. The best play in the current game might be imperfect and imprecise, and that’s fine.
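A toy simulation (my own construction, with made-up numbers) of the point in (9-10): when only rank matters, a strategy with a lower expected score but higher variance can win more often than a “better on average” safe strategy.

```python
import random

random.seed(0)
OPPONENT_SCORE = 60.0  # hypothetical fixed score to beat
TRIALS = 100_000

def win_rate(mean, sd):
    """Estimate P(your score beats the opponent) for a Gaussian strategy."""
    wins = sum(random.gauss(mean, sd) > OPPONENT_SCORE for _ in range(TRIALS))
    return wins / TRIALS

safe = win_rate(55, 1)    # higher expected score (55), low variance
risky = win_rate(50, 20)  # lower expected score (50), high variance
print(safe, risky)
```

Maximizing E(score) picks the safe strategy, but maximizing P(win) picks the risky one: the safe strategy almost never clears 60, while the high-variance strategy does so roughly 30% of the time.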
11) Scholarship—certainly it doesn’t always make sense when weighed against other things. Until it does. The person on the poverty line who learns more when they have time gains powers the others don’t have. It may unlock doors out of their situation that others can’t access. As with everything else, it must be weighed against the other exigencies of their life.
(Also, by the way, I’m not sure what your title means. Maybe rephrase it?)
What would you like to see posts about?
I strongly encourage you to do it. I’m typing up a post right now specifically encouraging people to summarize fields in LW discussion threads as a useful way to contribute, and I think I’m just gonna use this as an example since it’s on my mind.
This is helpful, thanks.
In the “Rationality is about winning” train of thought, I’d guess that anything materially different in post-rationality (tm) would be eventually subsumed into the ‘rationality’ umbrella if it works, since it would, well, win. The model of it as a social divide seems immediately appealing for making sense of the ecosystem.
Any chance you could be bothered to write a post explaining what you’re talking about, at a survey/overview level?
I disagree. The point is that most comments are comments we want to have around, and so we should encourage them. I know that personally I’m unmotivated to comment, and especially to put more than a couple minutes of work into a comment, because I get the impression that no one cares if I do or not.
One general suggestion to everyone: upvote more.
It feels a lot more fun to be involved in this kind of community when participating is rewarded. I think we’d benefit by upvoting good posts and comments a lot more often (based on the “do I want this around?” metric, not the “do I agree with this poster” metric). I know that personally, if I got 10-20 upvotes on a decent post or comment, I’d be a lot more motivated to put more time in to make a good one.
I think the appropriate behavior is, when reading a comment thread, to upvote almost every comment unless you’re not sure it’s a net positive to keep around—then downvote if you’re sure it’s bad, or don’t touch it if you’re ambivalent. Or, alternatively: upvote comments you think someone else would be glad to have read (most of them), don’t touch comments that are just “I agree” without meat, and downvote comments that don’t belong or are poorly crafted.
This has the useful property of being an almost zero effort expenditure for the users that (I suspect) would have a larger effect if implemented collectively.
I only heard this phrase “postrationality” for the first time a few days ago, maybe because I don’t keep up with the rationality-blog-metaverse that well, and I really don’t understand it.
All the descriptions I come across when I look for them seem to describe “rationality, plus being willing to talk about human experience too”, but I thought the LW-sphere was already into talking about human experience and whatnot. So is it just “we’re not comfortable talking about human experience in the rationalist sphere so we made our own sphere”? That is, a cultural divide?
That first link writes “Postrationality recognizes that System 1 and System 2 (if they even exist) have different strengths and weaknesses, and what we need is an appropriate interplay between the two.”. Yet I would imagine everyone on LW would be interested in talking about System 1 and how it works and anything interesting we can say about it. So what’s the difference?
(19 Jan 2017 20:18 UTC; 13 points) — comment on “Thoughts on ‘Operation Make Less Wrong the single conversational locus’, Month 1”
Why do you think there is nothing wrong with your delivery? Multiple people have told you that there was. Is that not evidence that there was? Especially because it’s the community’s opinions that count, not yours?
Rude refers to your method of communicating, not the content of what you said. “I mean that you do not know of the subject, and I do. I can explain it, and you might understand” is very rude, and pointlessly so.
Why do you think you know how much game theory I know?
edit: I edited out the “Is English your first language” bit. That was unnecessarily rude.
I’m not trying to welcome you, I’m trying to explain why your posts were moved to drafts against your will.
I’m not arguing with or talking about Nash’s theory. I’m telling you that your posts are low quality and you need to fix that if you want a good response.
My point in the last paragraph is that you are treating everyone like dirt and coming across as repulsive and egotistical.
“You are incorrect” was referring to “No, you can’t give me feedback.”. Yes, we can. If you’re not receptive to feedback, you should probably leave this site. You’re also going to struggle to socialize with any human beings anywhere with that attitude. Everyone will dislike you.
Keep in mind that it’s irrelevant how smart or right you are if no one wants to talk to you.
How could you possibly know what a random person knows of? Why are you so rude?
Re this post: http://lesswrong.com/lw/ogp/a_proposal_for_a_simpler_solution_to_all_these/
You wrote something provocative but provided no arguments or explanations or examples or anything. That’s why it’s low-quality. It doesn’t matter how good your idea is if you don’t bother to do any legwork to show anyone else. I for one have no idea why your idea would work, and don’t care to do the work to figure it out, because the only reason I have to do that work is that you said so.
Also, you might want to tackle something more concrete than “all these difficult observations and problems”. First, it’s definitely true that your ‘solution’ doesn’t solve all the problems. Maybe it helps with some. So which ones? Talk about those.
Also, your writing is exhaustingly vague (“I also value compression and time in this sense, and so I think I can propose a subject that might serve as an “ideal introduction” (I have an accurate meaning for this phrase I won’t introduce atm).”). This is really hard not to lose interest in while reading, and it’s only two random sample sentences.
Re http://lesswrong.com/lw/ogt/do_we_share_a_definition_for_the_word_ideal/, you’re going to have to do more work to make an interesting discussion. It’s not like “Oh, Flinter, good point, you and (all of us) might have different meanings for ‘ideal’!” is going to happen. It’s on you to show why this is interesting. What made you think the meanings are different? What different results come from that? What’s your definition? What do you think other peoples’ are, and why are they worse?
I agree with Vaniver that those two posts in their current form should have been at least heavily downvoted. Though that doesn’t happen much in practice here since traffic is low. I’m not sure what the removal policy is but I guess it probably applied.
Also, if you keep writing things like “No, you can’t give me feedback. It’s above you. I have come here to explain it to you. I made 3 threads, and they are equally important.” you’re going to be banned for being an ass, no question. You’re also wildly incorrect, but that’s another matter.
This reminds me of an effect I’ve noticed a few times:
I observe that in debates, having two (or more) arguments for your case is usually less effective than having one.
For example, if you’re trying to convince someone (for some reason) that “yes, global warming is real”, you might have two arguments that seem good to you:
scientists almost universally agree that it is real
the graphs of global temperature show very clearly that it is real
But if you actually cite both of these arguments, you start to sound weaker than if you picked one and stuck with it.
With one argument your stance is “look, this is the argument. You either need to accept this argument or show why it doesn’t work—seriously, I’m not letting you get past this”. And if they find a loophole in your argument (maybe they find a way to believe the data is totally wrong, or something), then you can bust out another argument.
But when you present two arguments at once, it sounds like you’re just fishing for arguments. You’re one of those people who’s got a laundry list of reasons for their side, which is something that everyone on both sides always has (weirdly enough), and your stance has become “look how many arguments there are” instead of “look HOW CONVINCING these arguments are”. So you become easier to disbelieve.
As it happens, there are many good arguments for the same point, in many cases. That’s a common feature of Things That Are True—their truth can be reached in many different ways. But as a person arguing with a human, in a social setting, you often get a lot more mileage out of insisting they fight against one good argument instead of just overwhelming them with how many arguments you’ve got.
The weak arguments mentioned in the linked article multiply this effect considerably. In my mind there are, like, two obvious arguments against theism that you should sit on and not waver from: “What causes you to think this is correct (over anything else, or just over ‘we don’t know’)?” and, if they cite their personal experience / mental phenomenon of religious feelings, “Why do you believe your mental feelings have weight when human minds are so notoriously unreliable?”
Arguments about Jesus’ existence are totally counterproductive—they can only weaken your stance, since, after all, who would be convinced by that who wasn’t already convinced by one of the strong arguments?