When you rephrase this to be about search engines: "I think the main reason why we won't censor search to some abstract conception of 'community values' is because users won't want to rent or purchase search services that are censored to such a broad target."
It doesn't describe reality. Most of us consume search and recommendations that have been censored (e.g. removing porn, piracy, toxicity, racism, taboo politics) in a way that puts cultural values over our preferences or interests.
So perhaps it won't be true for AI either. At least in the near term, the line between AI and search is blurred, and the same pressures exist on consumers and providers.
In the near term AI and search are blurred, but that's a separate topic. This post was about AGI as distinct from AI. There's no sharp line between them, but there are important distinctions, and I'm afraid we're confused as a group because of that blurring. More above; it's worth its own post and some sort of new clarifying terminology. The term AGI has been watered down to include LLMs that are fairly general, rather than the original and important meaning: AI that can think about anything, which implies the ability to learn, and therefore almost necessarily to have explicit goals and agency. This post was about that type of "real" AGI, which is still hypothetical even though increasingly plausible in the near term.
That's true, they are different. But search still provides the closest historical analogue (maybe employees/suppliers provide another). Historical analogues have the benefit of being empirical and grounded, so I prefer them over (or alongside) pure reasoning or judgement.
I also expect AIs to be constrained by social norms, laws, and societal values. But I think there’s a distinction between how AIs will be constrained and how AIs will try to help humans. Although it often censors certain topics, Google still usually delivers the results the user wants, rather than serving some broader social agenda upon each query. Likewise, ChatGPT is constrained by social mores, but it’s still better described as a user assistant, not as an engine for social change or as a benevolent agent that acts on behalf of humanity.