There is no guarantee that any way exists for them to understand.
Consider the possibility that only people with a nontrivial level of understanding can work with 5 TB+ of data. It could be a practical boost in capability that comes from understanding storage-technology principles and tools… maybe?
What level of sophistication would you think is un-idiot-proof-able? Nuclear missiles? Not-proven-to-be-friendly AI?
I think it would be interesting to weigh the benefits of human desire modification in all its forms (ranging from strategies like delayed gratification to brain pleasure-centre stimulation, covered very well in this fun theory sequence article) against the costs of continuous improvement.
Some of these costs:
Resource exhaustion: There is always the risk of using up resources early on relatively unimportant things and then facing constraints for later, more important purposes. This risk materialises more often the faster we develop. Undoing material exhaustion is difficult; undoing energy exhaustion is impossible.
Environmental limits: Excessive global warming, pollution, etc. impose costs on humans.
Economic: Continuous uncoordinated development likely misallocates resources due to various market imperfections.
Social: Creating winners and losers is harmful to people's happiness.
Psychological: If we adapt to each improvement faster than we can produce the next one, we end up less happy over time.
A lot of singularitarian thought holds human desire to be exogenous and untouchable, which seems a rather odd blind spot to have… we rightly discard the notion that death is desirable because it is natural, but not the notion that desire is sacred and hence should always be fulfilled, fighting against any and all limits?