I agree that more research on the effects of recommendation algorithms on the brain would be useful.
Also research looking at which cognitive biases and preferences the algorithms are exploiting, and who is most susceptible to them, e.g. children, neurodivergent people, etc. It seems plausible to me that some AI applications, e.g. character.ai as you say, will be optimising for some form of human interaction, and exploiting human biases and cognitive patterns will be a big part of this.
I think it comes down to incentives. Based on my recent reading of ‘The Chaos Machine’ by Max Fisher, I think it’s closely linked to continually increasing engagement driving profit. Addictiveness unfortunately leads to more engagement, which in turn leads to profit. Emotive content (clickbait-style, extreme material) also increases engagement.
Tools and moderation processes may be expensive in their own right, but I think it’s when they start to challenge the underlying business model of ‘more engagement = more profit’ that companies find themselves in a more uncomfortable position.