Independent of potential for growing into AGI and {S,X}-risk resulting from that?
With the understanding that these are very rough descriptions that need much more clarity and nuance, that one or two of them might be flat-out wrong, that some of them might turn out to be impossible to codify usefully in practice, that there might be specific exceptions for some of them, and that the list isn't necessarily complete--
Recommendation systems that optimize for “engagement” (or proxy measures thereof); see the sketch after this list.
Anything that identifies or tracks people, or proxies like vehicles, in spaces open to the public. Also collection of data that would be useful for this.
Anything that mass-classifies private communications, including closed group communications, for any use by anybody not involved in the communication.
Anything specifically designed to produce media showing real people in false situations or to show them saying or doing things they have not actually done.
Anything that adaptively tries to persuade anybody to buy anything or give anybody money, or to hold or not hold any opinion of any person or organization.
Anything that tries to make people anthropomorphize it or develop affection for it.
Anything that tries to classify humans into risk groups based on, well, anything.
Anything that purports to read minds or act as a lie detector, live or on recorded or written material.
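To make the first item concrete, here is a minimal toy sketch of what ranking purely for engagement looks like. Everything in it (names, signals, weights) is invented for illustration and isn't taken from any real system; the point is only that the objective blends proxy engagement signals and contains no term for user wellbeing or accuracy.

```python
# Toy sketch (hypothetical): a feed ranker that scores items purely by
# predicted engagement. Nothing in the objective asks whether the item
# is good for the user -- which is exactly the concern.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_clicks: float      # proxy signal: click-through probability
    predicted_dwell_secs: float  # proxy signal: expected time-on-item

def engagement_score(item: Item) -> float:
    # A weighted blend of proxy signals (weights are made up).
    return 0.7 * item.predicted_clicks + 0.3 * (item.predicted_dwell_secs / 60)

def rank_feed(items: list[Item]) -> list[Item]:
    return sorted(items, key=engagement_score, reverse=True)

feed = rank_feed([
    Item("Calm news summary", 0.05, 40),
    Item("Outrage bait", 0.30, 120),
])
print([i.title for i in feed])  # the outrage bait ranks first
```

Under this kind of objective, whatever content most reliably provokes clicking and lingering rises to the top, regardless of its effect on the person consuming it.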
Good list. Another one that caught my attention in the EU AI Act was AIs specialised in subliminal messaging. People's choices can be conditioned for or against things by feeding them sensory data that isn't consciously perceptible, and it can also affect their emotional states more broadly.
I don't know how effective this stuff is in real life, but I do know that the effect is real.
> Anything that tries to classify humans into risk groups based on, well, anything.
A particular example of that one is social scoring systems, which are surely going to be used by authoritarian regimes. You can screw people over in so many ways when social control is centralised in AI systems. It's a great tool for punishing people for not being chauvinists.
Which ones, I wonder?
Social scoring is already beginning in China.