You say that, but this post is terrible for convincing people, and I knew it would be as I wrote it; hopefully that’s obvious. I’m still not sure which part of my brain’s whole-world model is relevant to why I think his approach doesn’t work; I can’t even express a first-order approximation. It just seems so obvious, which might mean I’m wrong; it might mean I understand something so deeply that I no longer know the beginner explanation; or it might mean I’m bouncing off thinking about this in detail because the relevant models have an abort() on the paths I’d need to reason along. Actually, that last one sounds likely… hmm. E.g., an approximate model with validity guards so I don’t crash my social thinking? I guess? Something like the toy sketch below.
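To make the metaphor concrete, here is a minimal sketch, with every name invented for illustration: an approximate model that refuses to run at all outside its validity domain, rather than returning garbage that would corrupt downstream reasoning.

```python
# Toy sketch only: KNOWN_DOMAIN and the function names are made up.
KNOWN_DOMAIN = {"small_talk", "negotiation", "disagreement"}

def social_model(situation: str) -> str:
    """Approximate model of a social situation, with a validity guard."""
    if situation not in KNOWN_DOMAIN:
        # The abort(): refuse to reason here at all, instead of
        # extrapolating and feeding a bad answer into social thinking.
        raise RuntimeError(f"model not valid for {situation!r}")
    return f"rough prediction about {situation}"
```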
Like, Yudkowsky is going around pissing off the e/acc folks in unnecessary ways. I think he could focus more deliberately on which ways he irritates them; he’s not going to stop irritating most of them while making his points, but he doesn’t have to add gratuitous offense on top. But… I don’t know.
Part of the problem might be Twitter. If you’re on Twitter, you’re subject to the agency of the Twitter recommender, which rewards you when you say things that generate conflict. If you, as a human, do RL on Twitter, the Twitter algorithm will RL-train you to do … <the bad thing he’s doing>. He was doing it long before Twitter, too; it’s just particularly important now. A toy sketch of the dynamic follows.
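Here is a bare-bones sketch of that dynamic, with everything invented for illustration (the two “posting styles,” the reward numbers): a simple reward-following learner, fed an engagement signal that pays more for conflict on average, drifts toward the conflict-generating style.

```python
import random

# Toy sketch only: actions and reward numbers are assumptions,
# not a model of the actual Twitter recommender.
ACTIONS = ["measured", "inflammatory"]
value = {a: 0.0 for a in ACTIONS}   # running estimate of payoff per style
counts = {a: 0 for a in ACTIONS}

def engagement(action: str) -> float:
    # The assumption doing all the work: conflict-generating posts
    # get more engagement on average.
    return random.gauss(2.0 if action == "inflammatory" else 1.0, 0.5)

for _ in range(1000):
    # Epsilon-greedy: mostly repeat whichever style has paid off so far.
    if random.random() < 0.1:
        a = random.choice(ACTIONS)
    else:
        a = max(value, key=value.get)
    r = engagement(a)
    counts[a] += 1
    value[a] += (r - value[a]) / counts[a]  # incremental mean update

print(value)  # "inflammatory" ends up valued higher and dominates
```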
See my post “AI scares and changing public beliefs” for one theory of why what Yudkowsky is doing is a bad idea. I was, of course, thinking primarily of his approach when writing about polarization.
The other post I’ve been contemplating writing is “An unrecognized goddamn principle of fucking rational discourse: be fucking nice”. Yudkowsky talks down to people. That’s not nice, and it makes them emotionally want to prove him wrong instead of wanting to find ways to agree with him.
I should clarify that being right and convincing people you’re right are NOT orthogonal here on LessWrong. If you can explain why you’re sure you’re right, the explanation itself will convince people. Writing posts like this one is a way to draw people to a worthy project.
I think you’re right, and I think talking about it here is the right way to make sure of that and to figure out collectively what to do about this issue.