Yup, I think research that studies the effects of recommendation algorithms from various social media platforms on the brain, and compares them to the effects of narcotics, would be extremely useful.
I think we’re really lacking in decent legislation for recommendation algorithms atm. At the absolute bare minimum, platforms that use highly addictive algorithms should carry some kind of warning label informing users of the risk of addiction, similar to cigarettes, so that parents know clearly what might happen to their children.
This is going to be even more important as things like character.ai grow.
Yup, we should create an equivalent of the Nutri-Score for different recommendation AIs.
I agree that more research on the effect of recommendation algorithms on the brain would be useful.
Also research looking at which cognitive biases and preferences the algorithms are exploiting, and who is most susceptible to these (e.g. children, neurodiverse people, etc.). It seems plausible to me that some AI applications, e.g. character.ai as you say, will be optimising for some sort of human interaction, and exploiting human biases and cognitive patterns will be a big part of this.
Yes, this would be very good. I might hold a hackathon/ideathon for this in January.