Fwiw, you’re on my shortlist of researchers whose potential I’m most excited about. I don’t expect my judgment to matter to you (or maybe up to one jot), but I mention it just in case it helps defend against the self-doubt you experience as a result of doing things differently. : )
I don’t know many researchers that well, but I try to find the ones that are sufficiently unusual-in-a-specific-way to make me feel hopefwl about them. And the stuff you write here reflects exactly the unusualness that makes me hopefwl: you actually think inside your own head.
Also, wrt defending against negative social reinforcement signals, it may be sort of epistemically-irrational, but I reinterpret [people disagreeing with me] as positive evidence that I’m just far ahead of them (something I actually believe). Notice how, when a lot of people tell you you’re wrong, that is evidence for both [you are wrong] and [you are so much righter than them that they are unable to recognise how you are right (eg they lack the precursor concepts)].
Also, if you expect [competence at world-saving] to be normally (or lognormally) distributed, you should expect to find large gaps between the competence of the most competent people, simply because the tail flattens out the further out you go. In other words, P(you’re Δ more competent than avg) gets closer to P(you’re Δ+1 more competent than avg) as you increase Δ. This is one way to justify treating [other people not paying attention to you] as evidence for [you’re in a more advanced realm of conversation], but it’s far from the main consideration.
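(To make the tail claim concrete, here's a quick stdlib-Python sanity check, assuming a standard normal and a standard lognormal. One caveat: the ratio P(X > Δ+1) / P(X > Δ) only creeps toward 1 in the heavy-tailed lognormal case; for a thin normal tail that ratio actually shrinks, though the absolute gap between the two probabilities still vanishes either way.)

```python
import math

def normal_sf(x):
    """P(X > x) for a standard normal."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def lognormal_sf(x):
    """P(X > x) for a lognormal with mu=0, sigma=1 (i.e. ln X is standard normal)."""
    return normal_sf(math.log(x))

for d in (1, 2, 3, 4):
    print(d,
          round(normal_sf(d + 1) / normal_sf(d), 3),       # shrinks with d: thin tail
          round(lognormal_sf(d + 1) / lognormal_sf(d), 3)) # grows toward 1: heavy tail
```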
I invite you to meditate on this Mathematical Diagram I made! I believe that your behaviour (wrt the dimension of consequentialist world-saving) is so far to the right of this curve that most of your peers will think your competence is far below theirs, unless they patiently have multiple conversations with you. That is, most people’s deference limit is far to the left of your true competence.
I’m now going to further destroy the vibes of this comment by saying “poop!” If someone, in their head, notices themselves downvaluing the wisdom of what I previously wrote merely based on the silly vibes, their cognition is out of whack and they need to see a mechanic. This seems to be a decent litmus test for whether ppl have actual sensors for evidence/gears, or whether they’re just doing (advanced) vibes-based pattern-matching. :P
Can you explain why you use “hopefwl” instead of “hopeful”? I’ve seen this multiple times in multiple places by multiple people, but I do not understand the reasoning behind it. This is not a typo; it is a deliberate design decision by some people in the rationality community. Can you please help me understand.
You have permission to steal my work & clone my generating function. Liberate my vision from its original prison. Obsolescence is victory. I yearn to be surpassed. Don’t credit me if it’s more efficient or better aesthetics to not. Forget my name before letting it be dead weight.
This seems to be a decent litmus test for whether ppl have actual sensors for evidence/gears, or whether they’re just doing (advanced) vibes-based pattern-matching.
If only. Advanced vibes-based pattern-matching is useful when your pattern-matching algorithm is optimized for the distribution you are acting in.
but u don’t know which distribution(s) u are acting in. u only have access to a sample dist, so u are going to underestimate the variance unless u ~Bessel-correct[1] ur intuitions. and it matters which parts of the dists u tune ur sensors for: do u care more abt sensitivity/specificity wrt the median cluster or sensitivity/specificity wrt the outliers?
ig sufficiently advanced vibes-based pattern-matching collapses to doing causal modelling, so my real-complaint is abt ppl whose vibe-sensors are under-dimensional.
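(a minimal stdlib sketch of the Bessel point, in case it helps: the naive sample variance, dividing by n, systematically lowballs the true variance on small samples; dividing by n-1 removes the bias.)

```python
import random

random.seed(0)
n, trials = 5, 20000              # small samples, many repetitions
biased = corrected = 0.0
for _ in range(trials):
    xs = [random.gauss(0, 1) for _ in range(n)]   # true variance is 1
    m = sum(xs) / n
    ss = sum((x - m) ** 2 for x in xs)
    biased += ss / n              # naive estimator: expectation (n-1)/n = 0.8
    corrected += ss / (n - 1)     # Bessel-corrected: expectation 1.0
print(biased / trials, corrected / trials)
```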
So you seem to be doing top-down reasoning here, going from math to a model of the human brain. I didn’t actually have something like that in mind, and instead was doing bottom-up reasoning, where I had a bunch of experiences involving people that gave me a sense for (1) what it means to do vibes-based pattern-matching, and (2) when you should and shouldn’t trust your intuitions. I really don’t think it is that hard, actually!
Also your Remnote link is broken, and I think it is pretty cool that you use Remnote.
Initially, I thought that your comment did not apply to me at all. I thought that most of the negative feedback I get is actually correct in content, just delivered badly. But now that I think about it, it seems that most of the negative feedback I get comes from somebody not sufficiently understanding what I am saying. This might be in large part because I fail to explain it properly.
There are definitely instances, though, where people did point out big important holes in my reasoning. All of the people who did that were really competent, I think. And they pointed out things in such a way that I was like “Oh damn, this seems really important! I should have thought about this myself.” But I did not really get negative reinforcement from them at all. They usually pointed it out in a neutral philosopher style, where you talk about the content, not the person. I think most of the negative feedback I am talking about is what you get when people don’t differentiate between the content and the person. You want to say “This idea does not work for reason X”. You don’t want to say “Your idea is terrible because you did not write it up well, and even if you had written it up well, it seems to really not talk about anything important.”
Interestingly, I get less and less negative feedback on the same things I do. This is probably because of a selection effect where people who like what I do stick around. However, another major factor seems to be that because I have worked on what I do for so long, it gets easier and easier to explain. In the beginning it is very illegible, because it is mostly intuitions. And then, as you cash out the intuitions, things become more and more legible.
This is an interesting concept. I wish it became a post.
u’r encouraged to write it!
idk what the right math tricks to use are, i just wanted to mk the point that sample dists underestimate the variance of the true dists
also, oops, fixed link. upvoted ur comment bc u complimented me for using RemNote, which shows good taste.