I don’t see how this follows. If people were interested in rationality itself, would they be less likely to organize or attend meetups?
That is really a weak point I made there. It was not meant to be an argument but just a guess. I also don't want to accuse people of being overly interested in creating a community as an end in itself rather than a community whose overall aim is to seek truth. I apologize for hinting at that possibility.
Let me expand on how I came to make that statement in the first place. I have always been more than a bit skeptical about the reputation system employed on lesswrong. I think it might unconsciously lead people to agree, because even slight disagreement can accumulate into negative karma over time. And even if, on some level, you don't care about karma, each time you are downvoted you receive an incentive not to voice that opinion the next time, or to change how you present it. I have noticed that I myself, although I believe I don't care much about my rank within this community, have become increasingly reluctant to say something that I know will lead to negative karma. This works, of course, insofar as it maximizes the kind of content the collective intelligence of lesswrong is interested in. But that content might be biased and, to some extent, dishonest. Are we really good at collectively deciding what we want to see more of, just by clicking two buttons that increase a reward number? I am skeptical.
Now, if you take into account my admittedly speculative opinion above, you might already guess what I think about the strong social incentives that might result from face-to-face meetings between people interested in refining the art of rationality and learning about the nature of reality rather than their own subjective opinions and biases.
(I guess “interested” should be “disinterested” here.) Given that, except for a few hobbyists (like myself), all researchers depend for their continued livelihoods on others taking their ideas seriously, how does this sentence make sense?
I wasn’t clear enough; I didn’t expect the comment to get this much attention (which, I hope, disproves some of my points above). What I meant by “interested researchers rather than people who ask others to take their ideas seriously” is the difference between someone who studies a topic out of academic curiosity and someone who writes about a topic to convince people to contribute money to his charity. I don’t know how to say that without sounding rude or sneaking in connotations. Yes, lesswrong was created to support the mitigation of risks from AI (I can expand on this if you like; also see my comment here). Now, this obviously sounds as if I want to imply that there might be motives involved other than trying to save humanity. I am not saying that, although there might be subconscious motivations that those people aren’t even aware of themselves. I am just saying that it is another point that adds to the caution I perceive to be missing.
To be clear, I want the SIAI to get enough support to research risks from AI. I am just saying that I would love to see a bit more caution when it comes to some of the overall conclusions. Taking ideas seriously is a good thing, to a reasonable extent. But my perception is that some people here hold unjustifiably strong beliefs that might be logical implications of some well-founded methods, and I would be careful not to take those implications too far.
Please let me know if you want me to elaborate on any of the specific problems you mentioned.
It is the rare researcher who studies a topic solely out of academic curiosity. Grant considerations tend to put heavy pressure on researchers to produce results, and quick, dammit, so you’d better study something that will let you write a paper or two.
Yes, you should watch out for bias in blog posts written by people you don’t know who are potentially trying to sell you their charity. No, you should not relax that watchfulness when the author of whatever you’re reading has a Ph.D.