Hello Eliezer, glad to have some interaction with you about these vital ideas. I was excited that you bothered to comment on my post, but alas, my excitement was cut short when I tried to reply to your comment with another post (longer than I wanted to place in a comment box) and was blocked from doing so.
Apparently my “downvote karma” is −24 for my post “Secret Cosmos: Introduction,” and my total karma is −30 based upon my two previous short posts from when I first opened my LessWrong account (three posts total so far, not counting my bio post; I don’t think anyone can downvote a bio page). I guess that means 30 people actually said they did not like my posts? It is not clear from the explanation of “rate limit” posted at LessWrong whether that is what −30 means. I understand the number −30, but I remain baffled nonetheless. Thirty people down-karma-voted my posts? Really? No one ventured to say why they down-karma-voted my posts; no one left any comment at all. Sort of like when the KKK burns a cross on someone’s front lawn but covers their faces.
Thank you for making yourself known to me.
Note: I really do not have any issue with 30 people not liking my ideas, but disliking my ideas without saying a word about what they disagree with is not my idea of rational dialogue in search of truth. I acknowledge that everyone is welcome to an opinion, even a wrong one, but common courtesy and etiquette suggest they defend that opinion with something more fundamental to justify it.
In any case, the reply I have written to your comment and to your post “Infinite Certainty” (https://www.lesswrong.com/s/FrqfoG3LJeCZs96Ym/p/ooypcn7qFzsMcy53R) is now posted on my Substack feed: Can We Agree on Anything for Sure? - by Al Link (substack.com)
I really do hope I have misinterpreted what is going on at LessWrong, but it all just seems to come down to this: some of the folks at LessWrong really cannot tolerate the apparently dangerous idea that certainty could be not only possible but actually necessary. I hope, in all of our best interests, that this is not actually the case with LessWrong. If it turns out that LessWrong really is an intolerant platform, I will simply not return there looking to engage its users in any further rational dialogue about truth.
I’m not Eliezer, but thanks for taking the time to read and engage with the post!
The best explanation I can give for the downvotes is that we have a limited amount of space on the front page of the site, and we as a community want to make sure people see content that will be most useful to them. Unfortunately, we simply don’t have enough time to engage closely with every new user on the site, addressing every objection and critique. If we tried, it would get difficult for long-time users to hear each other over the stampede of curious newcomers drawn here recently from our AI posts :) By the way, I haven’t downvoted your post; I don’t think there’s any point once you’ve already gotten this many, and I’d rather give you a more positive impression of the community than add my vote to the pile.
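One small mechanical note on the −30: as I understand it, karma is a weighted sum of votes rather than a headcount, since votes from established accounts count for more than one point and “strong” votes for more still. So −30 karma almost certainly does not mean thirty separate people downvoted you. Here is a minimal sketch of that arithmetic, with entirely made-up voter counts and vote weights:

```python
# A minimal sketch, assuming karma is a weighted sum of votes rather than
# a count of voters. The number of voters and the weights are hypothetical.
vote_weights = [-1, -2, -2, -5, -5, -7, -8]  # seven hypothetical downvotes

total_karma = sum(vote_weights)
print(f"{len(vote_weights)} voters -> {total_karma} karma")
# prints: 7 voters -> -30 karma
```

Under those assumptions, seven voters are enough to produce −30, so the group you are imagining is likely a good deal smaller than thirty.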
I’m sure you presented your ideas with the best of intentions, but it’s hard to tell which parts of your argument have merit behind them. In particular, you’ve brought up many arguments that have been partially addressed in popular LessWrong posts that most users have already read. Your point about certainty is just one example.
Believe me, LessWrong LOVES thinking about all the ways we could be wrong (maybe we do it a little too much sometimes). We just have a pretty idiosyncratic way we like to frame things. If someone comes along with ideas for how to improve our rationality, they’re much more likely to be received well if they signal that they’re familiar with the entire “LessWrong framework of rationality,” and then explain which parts of it they reject and why.
The common refrain for users who don’t know this framework is to “read the Sequences.” This is just a series of blog posts written by Eliezer in the early days of LessWrong. In the Sequences, Eliezer wrote a lot about consciousness, AI, and other topics you brought up—I think you’d find them quite interesting, even if you disagree with them! You could get started at https://www.readthesequences.com. If you can make your way through those, I think you’ll more than deserve the right to post again with new critiques of LessWrong-brand rationality—I look forward to reading them!