I would agree that this is true. But there are lots of different communication skills, and humans are really bad at some of them, so “greatest strength” still leaves a lot of room for error. When I look at Ben and Luke’s dialogue, I see places where they speak past each other, or walk into walls. And of course what Ben said was basically “we had some particular problems with communicating”—my claim is just that those problems are something we should aim to overcome.
But I’m just an amateur. If we have any psychology grad students on here, maybe we should shanghai them into figuring out how to start a communication dojo.
I’ve seen that sort of ‘talking past each other’ happen very often when one side doesn’t know the topic well enough for dialogue (but has a strong opinion anyway). I just don’t think it is useful to view it as a purely ‘communication’ problem. Perhaps the communication is good enough, and the ideas being communicated are bad (faulty). That’s what you should expect from someone whose only notable accomplishments are in communicating, and who’s failing with multiple other people, including university professors.
Errors in communication are there, believe me. Maybe their first mistake was choosing too big a topic (everything we disagree about :P), because it seems like they felt pressure to “touch on” a bunch of points, rather than saying “hold on, let’s slow down and make sure we’re talking about the same thing.”
And if the other person is wrong and not a good communicator, there are still some things you can do to help the dialogue, though this is hard and I’m bad at it—changing yourself is easy by comparison. For example, if it turns out that you’re talking about two different things (e.g. AI as it is likely to be built vs. AI “in general”), you can be the one to move over and talk about the thing the other person wants to talk about.
Well, I estimate negative utility for giving ideas about AI ‘in general’ to people who don’t understand the magnitude of the distinction between AIs ‘in general’ (largely AIs that could not be embedded within a universe with finite computational power) and the AIs that matter in practice.