But he’s right?
To be clear, I think he's not implementing good strategy given the technical and strategic landscape. Some of the things he's suggesting that sound awful to me might be good strategy if he phrased them in less misleading ways. But he's doing the thing where you hold yourself to honesty only under the high-detail interpretation of your sentences while allowing yourself to mislead structurally, a pattern that shows up in public communications from highly skilled technical people who don't consider the vibes-level interpretation to be valid at all.
Being right and being good at convincing people you’re right are not orthogonal, but they’re closer than we’d like to think.
You say that, but this post is terrible for convincing people, and I knew it would be as I wrote it; hopefully that's obvious. I'm still not sure which part of my brain's overall model of how the world works is driving my sense that his approach doesn't work; I can't even express a first-order approximation. It just seems so obvious. That might mean I'm wrong; it might mean I understand something so deeply that I no longer know the beginner explanation; or it might mean I'm bouncing off thinking about this in detail because the relevant models have an abort() on the paths I'd need to reason through. Actually, that last one sounds likely... hmm. E.g., an approximate model with validity guards so I don't crash my social thinking? I guess?
Like, Yudkowsky is going around pissing off the e/acc folks in unnecessary ways. I think it's possible to be more focused about which ways he irritates them; he's not going to stop irritating most of them while making his points, but still. I don't know.
Part of the problem might be Twitter. If you're on Twitter, you are subject to the agency of the Twitter recommender, which wants to boost you when you say things that generate conflict. If you as a human do RL on Twitter, the algorithm will RL-train you to do ... <the bad thing he's doing>. But he did it long before Twitter, too; it's just particularly important now.
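The feedback loop being described can be sketched as a toy simulation. This is purely illustrative (my framing, not from the thread): a poster picks a "spiciness" level for each post, a hypothetical recommender rewards conflict with engagement, and the poster does naive hill-climbing RL on that reward. The names, numbers, and reward shape are all made up.

```python
import random

def engagement(spiciness: float) -> float:
    """Hypothetical recommender: conflict drives engagement (plus noise)."""
    return spiciness + random.gauss(0, 0.1)

def run(steps: int = 200, lr: float = 0.05, seed: int = 0) -> float:
    """Naive RL: try a perturbed posting style, keep it if engagement rose."""
    random.seed(seed)
    spiciness = 0.2  # start out fairly mild
    for _ in range(steps):
        candidate = min(1.0, max(0.0, spiciness + random.gauss(0, 0.1)))
        if engagement(candidate) > engagement(spiciness):
            # Move toward whatever the feed rewarded.
            spiciness += lr * (candidate - spiciness)
    return spiciness

print(f"spiciness after RL on engagement: {run():.2f}")
```

The point of the sketch is just that the poster never chooses to become more conflict-seeking; the drift falls out of optimizing the reward signal the platform hands back.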
See my post "AI scares and changing public beliefs" for one theory of exactly why what Yudkowsky is doing is a bad idea. I was, of course, primarily thinking of his approach when writing about polarization.
The other post I've been contemplating writing is "An unrecognized goddamn principle of fucking rational discourse: be fucking nice". Yudkowsky talks down to people. That's not nice, and it makes them emotionally want to prove him wrong instead of wanting to find ways to agree with him.
I should clarify that being right and convincing people you're right are NOT orthogonal here on LessWrong. If you can explain why you're sure you're right here, it will convince people you're right. Writing posts like this one is a way to draw people to a worthy project here.
I think you're right, and I think talking about this here is the right way to make sure that's true and to figure out what to collectively do about this issue.
No, he's not right at all. That extends to a lot of pessimists on AI, but he is not, in fact, right, even if he uses strong language and is confident in an outcome.