Beliefs, after all, are for true things, and if you lose sight of that you will lose your epistemics. If you think only of what gives you an advantage in a debate, of what sounds nice, of what wins you the admiration of your peers, of what is politically correct, or of what you would prefer to be true, you will not be able to actually believe true things.
Paul Graham wrote about this in “Persuade xor Discover”:
The danger of the [version of an argument intended to persuade] is not merely that it’s longer. It’s that you start to lie to yourself. The ideas start to get mixed together with the spin you’ve added to get them past the readers’ misconceptions.
I think the goal of an essay should be to discover surprising things. That’s my goal, at least. And most surprising means most different from what people currently believe. So writing to persuade and writing to discover are diametrically opposed. The more your conclusions disagree with readers’ present beliefs, the more effort you’ll have to expend on selling your ideas rather than having them. As you accelerate, this drag increases, till eventually you reach a point where 100% of your energy is devoted to overcoming it and you can’t go any faster.
It’s hard enough to overcome one’s own misconceptions without having to think about how to get the resulting ideas past other people’s. I worry that if I wrote to persuade, I’d start to shy away unconsciously from ideas I knew would be hard to sell. When I notice something surprising, it’s usually very faint at first. There’s nothing more than a slight stirring of discomfort. I don’t want anything to get in the way of noticing it consciously.
This also reminded me of the Litany of Tarski:
Draco, let me introduce you to something I call the Litany of Tarski. It changes every time you use it. On this occasion it runs like so: If magic is fading out of the world, I want to believe that magic is fading out of the world. If magic is not fading out of the world, I want not to believe that magic is fading out of the world. Let me not become attached to beliefs I may not want.
All three of your examples involve using a phrase as shorthand for a track record. You call something a pollution-reducing law, a vehicle-producer, or a fit athlete only after observing consistent pollution reduction, vehicle output, or athletic results. That’s like the doctor calling something a “sleeping pill”, which is fine because the label comes after observing the pill’s track record.
The problem arises when there is no track record. For example, when someone proposes a new “environmental protection” law that has never actually been tested, people who hear that name may be less skeptical than if they heard “subsidies for Teslas”. In the latter case, they are more likely to ask whether the law would really help the environment and whether it might have unintended consequences.
The term “optimization power” doesn’t seem to add much here. Any prediction I make would be based on the track record you mentioned (using some model that “fits” that training data). For example, we might predict that the process will produce a good car, but not necessarily a good movie or laptop. Even for the examples of “optimization processes” mentioned in the article, such as humans and natural selection, I predict using the observed track record. If one chess player has reached a higher Elo rating than another, we can use that to predict that he’ll beat the other. That prediction invites justified questions about the chess variant, their past matches, and their recent form. Why instead claim that he has more “optimization power”, which invites fewer such questions?
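To make the Elo point concrete, here is a minimal sketch in Python of how the rating gap alone already encodes a prediction. The formula is the standard Elo expected-score formula; the specific ratings are made up for illustration:

```python
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score of player A against player B under the standard
    Elo model (win probability, counting a draw as half a point)."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

# Hypothetical ratings: a 200-point edge gives roughly a 76% expected score.
print(elo_expected_score(2400, 2200))  # ~0.76
```

The prediction falls straight out of the observed track record that produced the ratings; no extra concept is doing any work.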
Thanks for the thoughtful comment.