You don’t use eloquence for that. Eloquence is more for, e.g., waking someone up and making it easier for them to learn and remember ideas that you think they’ll be glad to have learned and remembered.
If you want to express how important you think something is, you can make a public prediction that it’s important and explain why; people who know things you don’t can then place your arguments in the context of their own knowledge and make their own predictions.
There may have been other, unmentioned optimization targets that also need eloquence.
Predictions:
(75%) Groups that successfully[1] adopt trust technology will economically and politically outcompete the rest of their respective societies rather quickly (within 10 years).
The efficiency gains feasibly up for grabs in the first 15 years, compared to the status quo, are over 100% (75%) or over 400% (50%).
(66%) Society-wide adoption of trust-building tech is a practical path, perhaps the only practical path, towards sane politics in general and sane AI politics in particular.
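Claims like these can be held to account once they resolve. Below is a minimal sketch of Brier-scoring them, my illustration rather than anything from the original post: the probabilities are the ones stated above, while the True/False outcomes are placeholders for how the claims might actually resolve.

```python
# Minimal sketch of Brier-scoring the predictions above once they resolve.
# Probabilities come from the post; the True/False outcomes are placeholders.

def brier_score(forecasts: list[tuple[float, bool]]) -> float:
    """Mean squared error between stated probabilities and 0/1 outcomes.
    Lower is better; always guessing 50% scores exactly 0.25."""
    return sum((p - float(o)) ** 2 for p, o in forecasts) / len(forecasts)

forecasts = [
    (0.75, True),   # trust-tech groups outcompete within 10 years
    (0.75, True),   # efficiency gains over 100% in the first 15 years
    (0.50, False),  # efficiency gains over 400% in the first 15 years
    (0.66, True),   # society-wide adoption leads to sane (AI) politics
]

print(f"Brier score: {brier_score(forecasts):.3f}")  # 0.123 for these placeholders
```

A log score would punish overconfident misses more harshly; the choice of scoring rule matters mainly at the extremes.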
The whole gestalt of why this is a huge affordance seems self-evident to me; it’s a cognitive weakness of mine that I often don’t know which parts of my thinking need to be spelled out in more words to be legible.
But one intuition is: regular “natural” human cultures are accidental products sampled from environments where deception-heavy strategies are dominant, and this imposes large deadweight costs on all pursuits of value, including economic value, happiness, friendship, and morality. Stated explicitly: most of our cognition goes into deceiving others, and the density of useful acts could be multiple times higher.
[1] I.e., build mutual understandings at least to, and ideally beyond, the point of family-like intimacy: feeling the others as extensions of oneself.