My understanding is that institutions can create an internal “game of telephone” while believing that they are listening to opposing opinions.
Basically, information gets diluted at each step:
you tell someone that “X is a problem”, but they understand it merely as “we need to provide some verbal justification for why we are going to do X anyway”, i.e. they don’t treat it as an actual problem, but as a mere bureaucratic obstacle to be navigated around;
your friend tells you that “X is a huge problem”, but your takeaway is merely that “X is a problem”, because on the one hand you trust your friend, but on the other hand you think he is too obsessed with X and exaggerates its impact;
your friend had inside information (which is the reason you decided to listen to him in the first place) that X will probably kill everyone, but he realizes that if he puts it that way, you will probably decide he is insane, so he instead chooses to state it diplomatically as “X is a huge problem”.
Taken together… you hired someone as an expert to provide important information (and you congratulate yourself for doing so), but ultimately you ignored everything he said. And everyone felt happy in the process… your friend felt happy that the people making the important decisions were listening to him… your organization felt happy to have an ISO-certified process for listening to outside critics… except that in the end nothing happened.
I am not saying that a parallel reality where your friend instead wrote a Facebook post saying “we are all going to die” (and was ignored by everyone who matters) would have had a better outcome. But it would have involved less self-deception by everyone.