I have strong-downvoted all of the GPT2 comments in the hope that a couple of other people will do likewise and push them below the threshold at which everyone gets them hidden without needing to diddle around in their profile. (I hope this doesn’t trigger some sort of automatic malice-detector and get me banned or anything. I promise I downvoted all those comments on their merits. Man, they were so bad they might almost have been posted by a bot or something!)
The idea is hilarious in the abstract, but very much less funny in reality because it makes LW horrible to read. Perhaps if GPT2 were responding to 20% of comments instead of all of them, or something, it might be less unbearable.
Agreed. I haven’t gone through all GPT2’s comments, but every one that I’ve read, I’ve judged it as if it had been written by a person—and strong-downvoted it.
BTW, LW developers, when viewing someone’s profile it would be useful to have, alongside the option to subscribe to their posts, an option to hide them, with the effect that their posts are automatically displayed (to me) as collapsed.
I’d expect that option to be bad overall. I might just be justifying an alief here, but it seems to me that closing a set of people off entirely will entrench you in your beliefs.
I want to see the X and Y that you are describing, but I don’t feel confident I’m able to make sense of it. So the question to ask, the one to which my brain replies, “You’re just as likely to get this wrong as the correct one,” seems to me a really important one.
I think your points are wrong. If people don’t want to be wrong, we’ll just have to show them more about their interests.
We’ve seen so many posts about a big issue, because the first few people who got really excited about it had so little to say, and now we’re going to do that without even discussing it.
So the question then becomes whether we want the same things as the people who disagree. I don’t think people overestimate the benefits and costs, but rather that they pay attention to the truth, to what the real consequences of “maybe what impact that’s having” are, and to what the concrete good things are and what things cost in those terms.
My personal view is that the first few people who come to feel that something is worth their time thinking about are going to be highly motivated, so they have to be motivated to make the actual impact, just as the people who don’t seem to care much, or don’t feel like it’s worth their time thinking about, are going to be motivated to produce the other things that really matter.
I’ve never enjoyed the work of reading the LW threads and have never even tried the LW code myself, but I’m afraid I probably just skipped some obvious stuff in my life and formed a number of incorrect beliefs. I don’t find that very surprising.