I’m definitely a crank, but I personally feel like I’m onto something? What’s the appropriate conduct for a crank that knows they’re a crank but still thinks they’ve solved some notorious unsolved problem? Surely it’s something other than “crawl into a hole and die”...
I’m definitely a crank, but I personally feel like I’m onto something?
That’s quite common for cranks ;)
If the ideas you want to propose are unorthodox, try to write in the most orthodox style in the venue you are addressing.
Look at how posts that have high karma are written and try to write your own post in the same style.
Secondly, you can take your post and tell ChatGPT that you want to post it on LessWrong and ask it what problems people are likely to have with the post.
Well, that’s the problem. I’ve been writing in a combination of my personal voice and my understanding of Eliezer’s voice. Eliezer has enough accumulated Bayes points that he is allowed to use parables and metaphors and such. I do not.
Probably the first, as much as this is the “wrong” answer to your question for the LessWrong crowd.
I would be pretty pissed off if my proposed solution to the alignment problem was attributed to someone who hasn’t gone through what I went through in order to derive it. Especially if that solution ended up being close enough to correct to form a cornerstone of future approaches to the problem.
I’m going to continue to present my ideas in the most appealing package I can devise for them, but I don’t regret posting them to LessWrong in the chaotic fashion that I did.
If you want your proposed solution attributed to you, writing it in a style that people actually want to engage with, rather than in “your personal voice”, would be the straightforward choice.
Larry McEnerney is great at explaining what writing is about.
Well, perhaps we can ask, what is reading about? Surely it involves reading through clearly presented arguments and trying to understand the process that generated them, and not presupposing any particular resolution to the question “is this person crazy” beyond the inevitable and unenviable limits imposed by our finite time on Earth.
There’s a lot of material to read. Part of being good at reading is spending one’s attention in the most effective way and not wasting it with low-value content.
That’s fair, and I need to do a better job of building on-ramps for different readers. My most recent shortform is an attempt to build such an on-ramp for the LessWrong memeplex.
I agree that that is an extremely relevant post to my current situation and general demeanor in life.
I guess I’m not willing to declare the alignment problem unsolvable just because it’s difficult, and I’m not willing to let anyone else claim to have solved it before I get to claim that I’ve solved it? And that inherently makes me a crackpot until such time as consensus reality catches up with me or I change my mind about my most deeply held values and priorities.
Are there any other posts from the sequences that you think I should read?
I guess I’m not willing to declare the alignment problem unsolvable just because it’s difficult
I’m not aware of anyone who has declared the alignment problem to be unsolvable. I have read a few people speculate that it MIGHT be unsolvable, but I’ve seen no real concrete attempts to show that it is (though I haven’t kept up with the literature as much as some others, so perhaps I missed something).
I’m not willing to let anyone else claim to have solved it before I get to claim that I’ve solved it
This just seems weird. If someone else solves alignment, why would you “not let them claim to have solved it”? And how would you do that? By just refusing to recognize it even if it goes against all the available evidence and causes people to take you less seriously?
No, I would do it by rushing to publish my work before it’s been cleaned up enough to be presentable. Scientists throughout history have rushed to publish in order to avoid getting scooped, and to scoop others. I do not wish to be the Rosalind Franklin of the alignment problem.
Why do you care so much about being first out the door, so much so that you’re willing to look like a clown/crackpot along the way?
The existing writings, from what I can see, don’t exactly portray the writer as a bona fide genius, so at best folks will perceive you as a moderately above average person with some odd tendencies/preferences, who got unusually lucky.
And then promptly forget about it when the genuine geniuses publish their highly credible results.
And that’s assuming it is even solvable, which seems to be increasingly not the case.
No, it’s not going to get you credit. That’s not how credit works in science or anywhere. It goes not to the first who had the idea, but the first who successfully popularized it. That’s not fair, but that’s how it works.
You can give yourself credit or try to argue for it based on evidence of early publication, but would delaying another day to polish your writing a little matter for being first out the door?
I’m sympathetic to your position here; I’ve struggled with similar questions, including wondering why I’m getting downvoted even after trying to get my tone right, and having what seem to me like important, well-explained contributions.
Recognizing that the system isn’t going to be completely fair or efficient and working with it instead of against it is unfortunate, but it’s the smart thing to do in most situations. Attempts to work outside of the existing system only work when they’re either carefully thought out and based on a thorough understanding of why the system works as it does, or they’re extremely lucky.
Historically, I have been extremely, extremely good at delaying publication of what I felt were capabilities-relevant advances, for essentially Yudkowskyan doomer reasons. The only reward I have earned for this diligence is to be treated like a crank when I publish alignment-related research, because I don’t have an extensive history of public contribution to the AI field.
Here is my speculation of what Q* is, along with a github repository that implements a shitty version of it, postdated several months.
Ask yourself: do you want personal credit, or do you want to help save the world?
Anyway, don’t get discouraged, just learn from those answers and keep writing about those ideas. And learning about related ideas so you can reference them and thereby show what’s new in your ideas. You only got severely downvoted on one, don’t let it get to you any more than you can help.
If the ideas are strong, they’ll win through if you keep at it.
I wouldn’t say I really do satire? My normal metier is more “the truth, with jokes”. If I’m acting too crazy to be considered a proper rationalist, it’s usually because I am angry or at least deeply annoyed.
I think “read the sequences” is an incredibly unhelpful suggestion. It’s an unrealistically high bar for entry. The sequences are absolutely massive. It’s like saying “read the whole Bible before talking to anyone at church”, but even longer. And many newcomers already understand the vast bulk of that content. Even the more helpful selected sequences are two thousand pages.
We need a better introduction to alignment work, LessWrong community standards, and rationality. Until we have it, we need to personally be more helpful to aspiring community members.
If someone is too wrong, and explicitly refuses to update on feedback, it may be impossible to give them a short condensed argument.
(If someone said that Jesus was a space lizard from another galaxy who came to China 10000 years ago, and then he publicly declared that he doesn’t actually care whether God actually exists or not… which specific chapter of the Bible would you recommend him to read to make him understand that he is not a good fit for a Christian web forum? Merely using the “Jesus” keyword is not enough, if everything substantial is different.)
Well, yes. I guess it’s more of an… expression of frustration. Like telling the space-lizard-Jesus guy: “Dude, have you ever read the Bible?” You don’t expect he did, and yes that is the reason why he says what he says… but you also do not really expect him to read it now.
(Then he asks you for help at publishing his own space Bible.)
Well what if he bets a significant amount of money at 2000:1 odds that the Pope will officially add his space Bible to the real Bible as a third Testament after the New Testament within the span of a year?
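As an aside, odds like that pin down exactly how confident the bettor is claiming to be. A minimal sketch (the stake and probability figures below are invented purely for illustration):

```python
# Accepting a bet at 2000:1 odds implies a break-even probability of
# 1/2001 for the event. The bettor only profits in expectation if their
# true credence in the event exceeds that implied probability.

def implied_probability(odds_against: float) -> float:
    """Break-even win probability for a bet offered at odds_against:1."""
    return 1.0 / (odds_against + 1.0)

def expected_value(stake: float, odds_against: float, p_win: float) -> float:
    """Bettor stakes `stake`; wins `stake * odds_against` with probability p_win."""
    return p_win * stake * odds_against - (1.0 - p_win) * stake

print(implied_probability(2000))          # just under 0.0005
print(expected_value(100.0, 2000, 0.01))  # positive whenever p_win exceeds 1/2001
```

So offering a 2000:1 bet is a claim that the event is at least ~0.05% likely, and anyone more skeptical than that should be happy to take the other side.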
What if he records a video of himself doing Bible study? What if he offers to pay people their current hourly rate to watch him do Bible study?
I guess the thrust of my questions here is, at what point do you feel that you become the dick for NOT helping him publish his own space Bible? At what point are you actively impeding new religious discoveries by failing to engage?
For real, literal Christianity, I think there’s no amount of cajoling or argumentation that could lead a Christian to accept the new space Bible. For one thing, until the Pope signs off on it, they would no longer be Christian if they did.
Does rationalism aspire to be more than just another provably-false religion? What would ET Jaynes say about people who fail to update on new evidence?
Since you explicitly asked for feedback regarding your downvotes, the “oh, woe is me, my views are so unpopular and my posts keep getting downvoted” lamentations you’ve included in a few of your posts get grating, and might end up self-fulfilling. If you’re saying unpopular things, my advice is to own it, and adopt the “haters gonna hate” attitude: ignore the downvotes completely.
(To be clear, we do have automatic regimes that restrict posting and commenting privileges for downvoted users, since we can’t really keep up with the moderation load otherwise, so there are some limits to your ability to ignore them)
Counterpoint: I think the restriction is too loose. There are enough people out there making posts that the real issue is lack of quality, not lack of quantity.
The problem is that a long-time contributor can be heavily downvoted once and become heavily rate-limited, and it then relies on them earning back their points to be able to post again. I wouldn’t say such a thing is necessarily terrible, but it seems to me to have driven away a number of people I was optimistic about, who were occasionally saying something many people disagree with and getting heavily downvoted.
I’m not sure I understand this concern. For someone who posts a burst of unpopular (whether for the topic, for the style, or for other reasons) posts, rate limiting seems ideal. It prevents them from digging deeper, while still allowing them to return to positive contribution, and to focus on quality rather than quantity.
I understand it’s annoying to the poster (and I’ve been caught and annoyed myself), but I haven’t seen any that seem like a complete error. I kind of expect the mods would intervene if it were a clear problem, but I also expect the base intervention is advice to slow down.
So yes, “quite a few”, especially if upvotes are scarcer than downvotes for the poster. But remember, during this time, they ARE posting, just not at the quantity that wasn’t working.
The real question is whether the poster actually changes behavior based on the downvotes and throttling. I do think it’s unfortunate that some topics could theoretically be good for LW, but end up not working. I don’t think it’s problematic that many topics and presentation styles are not possible on LW.
My understanding of the current situation with me is that I am not in fact rate-limited purely by automatic processes currently, but rather by some sort of policy decision on the part of LessWrong’s moderators.
Which is fine, I’ll just continue to post my alignment research on my substack, and occasionally dump linkposts to them in my shortform, which the mods have allowed me continued access to.
Get good at finding holes in your ideas. Become appropriately uncertain. Distill your points to increase signal to noise ratio in your communication. Ask for feedback early and often.
LessWrong is not very welcoming to those who are trying to improve their ideas but are currently saying nonsense, because the voting system automatically mutes those who say unpopular things very quickly. I’ve unfortunately encountered this quite a bit myself. Try putting warnings at the beginning that you know things are bad, request that you be downvoted to but not below zero so you can still post, etc. Explicitly invite harsh negative feedback frequently. Put everything in one post and keep it short. Avoid the aesthetic trappings of academia and just say what you mean concisely. If you’re going to be clever with language to make a point, explain the point without the cleverness too, or many readers won’t get it.
From the little I’ve understood it seems like you’re gesturing in the direction of the boundaries sequence but your SNR is terrible right now and I’m not sure what you’re getting at at all.
request that you be downvoted to but not below zero so you can still post
This would get an automatic downvote from me.
If you get downvoted, write differently, not more of the same plus a disclaimer that you know that this is not what people want but you are going to write more of it regardless. From my perspective, the disclaimer just makes it worse, because you can no longer claim ignorance.
It’s a workaround for a broken downvote system: “Don’t like my post? Legit. But please be aware the downvote system will ban me if I get heavily downvoted.”
Can you clarify what part of the downvote system is broken? If someone posts multiple things that get voted below zero, that indicates to me that most voters don’t want to see more of that on LW. Are you saying it means something else?
I do wish there were agreement indicators on top-level posts, so it could be much clearer to remind people “voting is about whether you think this is good to see on LW; agreement is about the specific arguments”. But even absent that, I don’t see many below-zero post scores that surprise me or that I think are strongly incorrect. If I did, I somewhat expect the mods would override a throttle.
I once got quite upset when someone posted something anti-trans. Because I wrote an angry reply, I got heavily downvoted. As a result, I was only able to post once a day for several months, heavily limiting my ability to contribute. Perhaps this is the intended outcome of the system, but I think there ought to be a better curve than that. Perhaps it could be directly related to time, rather than number of posts. As it was, I had to make an effort to make a trivial post regularly so I’d be able to make a spurt of specific posts when I had something I wanted to comment on.
Yeah, I can see how a single highly-downvoted bad comment can outweigh a lot of low-positive good comments. I do wish there were a way to reset the limit in the cases where a poster acknowledges the problem and agrees they won’t do it again (in at least the near-term). Or perhaps a more complex algorithm that counts posts/comments rather than just votes.
And I’ve long been of the opinion that strong votes are harmful, in many ways.
Fellow crank here. You might be making mistakes which aren’t obvious, but which most people on here know of because they’ve read the sequences. So if you’re making a mistake which has already been addressed on here before, that might annoy readers.
I have a feeling that you like teaching more than learning, and writing more than reading. I’m the same. I comment on things despite not being formally educated in them or researching them in depth, I’m not very conscientious, I don’t like putting in effort. But seeing other people confused about things that I feel like I understood long ago irks me, so I can’t help but voice my own view (which is rarely understood by anyone).
By the way, there is one mistake you might be committing. If you come up with a theory, then you can surely find evidence that it’s correct, by finding cases which are explained by this theory of yours. But it’s not enough to come up with a theory which fits every true case: true positives are one thing, false positives another, true negatives yet another, and false negatives yet another. A theory should be bounded from all sides; it should work in both the forward and the backward direction.
For instance, “Sickness is caused by bad smells” sounds correct. You can even verify a solid correlation. But if you try, I bet you can think of cases in which bad smells have not caused sickness. There are also cases where sickness was not caused by bad smells. Furthermore, germ theory more correctly covers true cases while rejecting false ones, so it’s more fitting than miasma theory. When you feel like you’re onto something, I recommend putting your theory through more checks. If you’re aware of all of this already, then I apologize.
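To make the forward/backward point concrete, here is a toy sketch; the smell/sickness observations are entirely made up for illustration:

```python
# A theory must be scored on all four cells of the confusion matrix:
# fitting every true case (high recall) is worthless if the theory also
# "predicts" plenty of cases that never occur (low precision).

def confusion_counts(predicted, actual):
    """Return (tp, fp, tn, fn) for paired boolean lists."""
    tp = sum(1 for p, a in zip(predicted, actual) if p and a)
    fp = sum(1 for p, a in zip(predicted, actual) if p and not a)
    tn = sum(1 for p, a in zip(predicted, actual) if not p and not a)
    fn = sum(1 for p, a in zip(predicted, actual) if not p and a)
    return tp, fp, tn, fn

# Invented observations: did "bad smell present" predict "person got sick"?
smell = [True, True, True, False, False, True, False, False]
sick  = [True, True, False, False, True, False, False, False]

tp, fp, tn, fn = confusion_counts(smell, sick)
precision = tp / (tp + fp)  # of the smelly cases, how many were sick?
recall = tp / (tp + fn)     # of the sick cases, how many were smelly?
```

On this invented data the miasma theory shows a correlation but leaks in both directions (false positives and false negatives); germ theory’s advantage is precisely that it tightens all four cells at once.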
Lastly, I admire that you followed through on the payments, and I enjoy seeing people (you) think for themselves and share their ideas rather than being cowardly and calling their cowardice “humility”.
Thank you for this answer. I agree that I have not visibly been putting in the work to make falsifiable predictions relevant to the ethicophysics. These can indeed be made in the ethicophysics, but they’re less predictions and more “self-fulfilling prophecies” that have the effect of compelling the reader to comply with a request to the extent that they take the request seriously. Which, in plain language, is some combination of wagers, promises, and threats.
And it seems impolite to threaten people just to get them to read a PDF.
I think the answer is “learn the field”. That’s what makes you not-a-crank. And that’s not setting a high bar; learning a little goes a long way.
The problem with being a crank is that professionals don’t have time to evaluate the ideas of every crank. There are a lot, and it’s harder to understand them because they don’t use the standard concepts to explain their ideas.
In my experience, cranks (at least physics cranks) realize that university email addresses are often public and send emails detailing their breakthrough/insight to as many grad students as they can. These emails never get replies, but (and this might surprise you) often get read. This is not a stupid strategy: if your work is legit (unlikely, but not inconceivable), this will make it known.
Except that Ramanujan sent letters (of course) rather than emails; the difference is important because writing letters to N people is a lot more work than sending emails to N people, so getting a letter from someone is more evidence that they’re willing to put some effort into communicating with you than getting an email from them is.
Yes, this is a valid and correct point. The observed and theoretical Nash Equilibrium of the Wittgensteinian language game of maintaining consensus reality is indeed not to engage with cranks who have not Put In The Work in a way that is visible and hard-to-forge.
It’s worth being clear in your mind the distinction between “put in the work” and “ideas that are both clear and correct (or at least promising)”. They’re related, especially work and the clarity of what the idea is, but not the same.
What do you care more about? Getting to write in “your personal voice” or getting your ideas well received?
Maybe, “try gaining skill somewhere with lower standards”?
Well, I have my substack. They let me post whatever I want whenever I want.
I think a generic answer is “read the sequences”? Here’s a fun one
https://www.lesswrong.com/posts/qRWfvgJG75ESLRNu9/the-crackpot-offer
Well, I’ll just have to continue being first out the door, then, won’t I?
https://bittertruths.substack.com/p/what-is-q
Same here.
Oh come on, I was on board with your other satire but no rationalist actually says this sort of thing
See The 101 Space You Will Always Have With You for a thorough and well-argued version of this argument.
I agree. But telling them to read the sequences is still pointless.
I agree that my suggestion was not especially helpful.
And now I have my own Sequence! I predict that it will be as unpopular as the rest of my work.
I think your automatic restriction is currently too tight. I would suggest making it decay faster.
Agreed. I haven’t suffered from this but the limits seem pretty extreme right now.
The rate limiting doesn’t decay until they’ve been upvoted for quite a number of additional comments afterwards.
https://www.lesswrong.com/posts/hHyYph9CcYfdnoC5j/automatic-rate-limiting-on-lesswrong claims it’s net karma on last 20 posts (and last 20 within a month). And total karma, but that’s not an issue for a long-term poster who’s just gotten sidetracked to an unpopular few posts.
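Going by that description, the mechanism is presumably something like the sketch below; the window size and threshold are stand-ins for illustration, not LessWrong’s actual parameters:

```python
# Hypothetical net-karma rate limiter: throttle a user whenever the summed
# karma of their most recent `window` contributions drops below `threshold`.
# One heavily downvoted item can outweigh many mildly upvoted ones, and the
# limit persists until enough new upvoted items push the sum back up.

def is_rate_limited(karma_history, window=20, threshold=0):
    """True if net karma over the last `window` items is below `threshold`."""
    return sum(karma_history[-window:]) < threshold

# A long-time contributor: fifteen +2 comments, one -45 post, four +2 comments.
history = [2] * 15 + [-45] + [2] * 4
print(is_rate_limited(history))  # the single -45 dominates the whole window
```

This is the dynamic discussed above: the steady +2 contributor gets throttled by one unpopular post, and only escapes by accumulating enough upvoted items to push the window’s sum back above the threshold.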
Perhaps it is about right, then.
The votes on this comment imply long vol on LW rate limiting.
I am painfully aware of this. I get awfully bored when I’m rate-limited.
And now I am officially rate-limited to one post per week. Be sure to go to my substack if you are curious about what I am up to.
I have read the sequences. Not all of them, because, who has time.
Here is a video of me reading the sequences (both Eliezer’s and my own):
https://bittertruths.substack.com/p/semi-adequate-equilibria
Get good at finding holes in your ideas. Become appropriately uncertain. Distill your points to increase the signal-to-noise ratio of your communication. Ask for feedback early and often.
LessWrong is not very welcoming to those who are trying to improve their ideas but are currently saying nonsense, because the voting system automatically mutes those who say unpopular things very quickly. I've unfortunately encountered this quite a bit myself. Try putting warnings at the beginning that you know things are bad; request that you be downvoted to, but not below, zero so you can still post; and explicitly invite harsh negative feedback frequently. Put everything in one post and keep it short. Avoid the aesthetic trappings of academia and just say what you mean concisely. If you're going to be clever with language to make a point, also explain the point without being clever with language, or many won't get it.
From the little I’ve understood it seems like you’re gesturing in the direction of the boundaries sequence but your SNR is terrible right now and I’m not sure what you’re getting at at all.
This would get an automatic downvote from me.
If you get downvoted, write differently, not more of the same plus a disclaimer that you know that this is not what people want but you are going to write more of it regardless. From my perspective, the disclaimer just makes it worse, because you can no longer claim ignorance.
It's a workaround for a broken downvote system: "Don't like my post? Legit. But please be aware the downvote system will ban me if I get heavily downvoted."
What you see as a broken system, I see as a system working exactly as intended.
Should we keep any nonsense on LW front page just because the author asked us nicely?
Can you clarify what part of the downvote system is broken? If someone posts multiple things that get voted below zero, that indicates to me that most voters don’t want to see more of that on LW. Are you saying it means something else?
I do wish there were agreement indicators on top-level posts, so it could be much clearer to remind people "voting is about whether you think this is good to see on LW; agreement is about the specific arguments". But even absent that, I don't see many below-zero post scores that surprise me or that I think are strongly incorrect. If I did, I somewhat expect the mods would override a throttle.
I once got quite upset when someone posted something anti-trans. Because I wrote an angry reply, I got heavily downvoted. As a result, I was only able to post once a day for several months, heavily limiting my ability to contribute. Perhaps this is the intended outcome of the system, but I think there ought to be a better curve than that. Perhaps it should be directly related to time rather than to number of posts: I had to make an effort to post something trivial regularly so I'd be able to make a spurt of specific posts when I had something I wanted to comment on.
Yeah, I can see how a single highly-downvoted bad comment can outweigh a lot of low-positive good comments. I do wish there were a way to reset the limit in the cases where a poster acknowledges the problem and agrees they won’t do it again (in at least the near-term). Or perhaps a more complex algorithm that counts posts/comments rather than just votes.
And I’ve long been of the opinion that strong votes are harmful, in many ways.
Agree that it should be time-based rather than karma-based.
I’m currently on a very heavy rate limit that I think is being manually adjusted by the LessWrong team.
Fellow crank here. You might be making mistakes which aren’t obvious, but which most people on here know of because they’ve read the sequences. So if you’re making a mistake which has already been addressed on here before, that might annoy readers.
I have a feeling that you like teaching more than learning, and writing more than reading. I’m the same. I comment on things despite not being formally educated in them or researching them in depth, I’m not very conscientious, I don’t like putting in effort. But seeing other people confused about things that I feel like I understood long ago irks me, so I can’t help but voice my own view (which is rarely understood by anyone).
By the way, there is one mistake you might be committing. If you come up with a theory, you can surely find evidence that it's correct by finding cases which that theory explains. But it's not enough to come up with a theory which fits every true case: true positives are one thing, false positives another, true negatives yet another, and false negatives yet another. A theory should be bounded from all sides; it should work in both the forward and the backward direction.
For instance, "Sickness is caused by bad smells" sounds correct. You can even verify a solid correlation. But if you try, I bet you can think of cases in which bad smells have not caused sickness, and cases where sickness was not caused by bad smells. Furthermore, germ theory more correctly covers true cases while rejecting false ones, so it is a better fit than miasma theory. When you feel like you're onto something, I recommend putting your theory through more checks. If you're aware of all of this already, then I apologize.
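The "bounded from all sides" point amounts to checking a theory against a full confusion matrix, not just its hits. A minimal sketch, with entirely invented cases chosen to illustrate the miasma-vs-germ comparison:

```python
# Invented, illustrative cases only: each tuple records whether a bad
# smell was present, whether germs were present, and whether sickness
# followed. The point is to check a theory in both directions.
cases = [
    # (bad_smell, germs_present, got_sick)
    (True,  True,  True),   # sewage: smells bad, germs present, sickness
    (True,  False, False),  # harmless stink: no sickness
    (False, True,  True),   # contaminated water: odorless, still sick
    (False, False, False),  # clean conditions, healthy
]

def confusion(predictions, outcomes):
    """Return (true pos, false pos, false neg, true neg) counts."""
    tp = sum(p and o for p, o in zip(predictions, outcomes))
    fp = sum(p and not o for p, o in zip(predictions, outcomes))
    fn = sum(o and not p for p, o in zip(predictions, outcomes))
    tn = sum(not p and not o for p, o in zip(predictions, outcomes))
    return tp, fp, fn, tn

sick   = [c[2] for c in cases]
miasma = [c[0] for c in cases]   # theory: bad smells cause sickness
germ   = [c[1] for c in cases]   # theory: germs cause sickness

print(confusion(miasma, sick))   # (1, 1, 1, 1): both false positives and false negatives
print(confusion(germ, sick))     # (2, 0, 0, 2): bounded from all sides
```

A theory that only racks up true positives can look well supported while leaking badly in the other three cells; counting all four is what separates "fits the cases I remembered" from "works in both directions".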
Lastly, I admire that you followed through on the payments, and I enjoy seeing people (you) think for themselves and share their ideas rather than being cowardly and calling their cowardice “humility”.
Thank you for this answer. I agree that I have not visibly been putting in the work to make falsifiable predictions relevant to the ethicophysics. These can indeed be made in the ethicophysics, but they're less predictions and more "self-fulfilling prophecies" that have the effect of compelling the reader to comply with a request to the extent that they take the request seriously. Which, in plain language, is some combination of wagers, promises, and threats.
And it seems impolite to threaten people just to get them to read a PDF.
I think the answer is “learn the field”. That’s what makes you not-a-crank. And that’s not setting a high bar; learning a little goes a long way.
The problem with being a crank is that professionals don’t have time to evaluate the ideas of every crank. There are a lot, and it’s harder to understand them because they don’t use the standard concepts to explain their ideas.
In my experience, cranks (at least physics cranks) realize that university email addresses are often public and send emails detailing their breakthrough/insight to as many grad students as they can. These emails never get replies, but (and this might surprise you) often get read. This is not a stupid strategy: if your work is legit (unlikely, but not inconceivable), this will make it known.
Right, this is how Ramanujan was discovered.
Except that Ramanujan sent letters (of course) rather than emails; the difference is important because writing letters to N people is a lot more work than sending emails to N people, so getting a letter from someone is more evidence that they’re willing to put some effort into communicating with you than getting an email from them is.
Yes, this is a valid and correct point. The observed and theoretical Nash Equilibrium of the Wittgensteinian language game of maintaining consensus reality is indeed not to engage with cranks who have not Put In The Work in a way that is visible and hard-to-forge.
It's worth keeping clear in your mind the distinction between "put in the work" and "ideas that are both clear and correct (or at least promising)". They're related, especially the work and the clarity of what the idea is, but they're not the same.