I’ve found something like this useful, especially at work, but hard to calibrate. “What would a less shy kalium do? Tell the CTO that he’s wrong, because he’s wrong.” Sometimes this is a good idea, but sometimes it’s not. “What would an optimally shy kalium do?” is not so easy to predict.
Perhaps your simulated assistant is optimized for the wrong thing, and you actually want Kalium Who Acts With Regard to the Greater Good of the Project or similar. “Don’t be shy” is orthogonal to “someone in charge is making it difficult to get stuff done”.