I’m brainstorming ways this post may be off the mark. Curious if you have any :)
You can personalise an AI service across some dimensions without making it resemble an agent acting on a person’s behalf any more closely (or without meeting all the criteria of ‘agentiness’):
not acting *over time*: more like a bespoke tool customised once to a customer’s preferred parameters, e.g. a website builder like wix.com
an AI service that personalises content according to a user’s likes/reads/‘don’t show’ clicks isn’t agent-like
efficient personalised services will be built on swappable modules and/or shared repositories of consumer preference components and contexts, meaning the company never actually runs an independent instantiation of the service (see the sketch just below)
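To make that modularity point concrete, here’s a minimal sketch (all names are mine and hypothetical, not from the post): a single shared service instance composes swappable preference components fetched from a shared repository at request time, rather than running an independent instantiation per customer.

```python
# Minimal sketch (hypothetical names): one shared service instance that
# composes swappable preference components from a shared repository at
# request time, so no independent per-user copy of the service ever runs.
from dataclasses import dataclass, field

@dataclass
class PreferenceComponent:
    """A reusable fragment of user preference, e.g. 'prefers concise answers'."""
    name: str
    params: dict = field(default_factory=dict)

# Shared repository: each component is stored once and referenced by many users.
REPOSITORY = {
    "concise": PreferenceComponent("concise", {"max_words": 50}),
    "verbose": PreferenceComponent("verbose", {"max_words": 500}),
}

class SharedService:
    """A single instance that serves every user by composing components."""
    def respond(self, component_names: list[str], query: str) -> str:
        prefs = [REPOSITORY[name] for name in component_names]
        limit = min((p.params["max_words"] for p in prefs), default=200)
        return f"[answer to {query!r}, trimmed to {limit} words]"

service = SharedService()  # one instantiation, shared by all customers
print(service.respond(["concise"], "compare laptops"))
print(service.respond(["verbose"], "compare laptops"))
```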
Personalisation of AI services will fall short of delegated agents, except in a few niches, because of a lack of demand or supply:
a handful of the largest software corporations (FAAMG, etc.) have locked customers into networks and routines, but are held back from personalising customer experiences because they tend to rely on third-party revenue streams
it’s generally more profitable to specialise in and market a service that caters to either high-paying discerning customers, or a broad mass audience that’s basically okay with anything you give them
mass customisation is too hard to manage, or isn’t cost-effective compared to other forms of business innovation
humans are already well-adapted and trained for providing personalised services; AI can compete better in other areas
humans already have very similar preferences within the space of theoretical possibilities – making catering to individual differences less fruitful than you’d intuitively think
it’s easier to use AI to shape users to have more homogenous preferences than to cater to preference differences
eliciting human preferences takes up too much of the user’s attention, and/or runs up against too many possible interpretations (depending on assumptions about the user’s rationality and prior knowledge, as well as relevant contextual cues) to work; see the toy sketch after this list
you can make more commercial progress by designing a common interface and acclimatising users to it, so they can meet their diverging preferences themselves, than by designing AI interfaces that elicit users’ preferences and act on their behalf
software engineers need a rare mix of thing- and person-oriented skills to develop delegated agents
a series of bad publicity incidents impedes further development (analogous to self-driving car crashes)
data protection or anonymisation laws in Europe and beyond limit personalisation efforts (or further down the line, restrictions on autonomous algorithms do)
delegated agents somehow don’t fit the current zeitgeist in high-income nations
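To illustrate the interpretation problem from the preference-elicitation point above, here’s a toy sketch (the Boltzmann-rationality model and all numbers are my assumptions, not the post’s): the same observed choice frequency is equally consistent with a mild preference expressed reliably and a strong preference expressed noisily, so behaviour alone can’t pin down what the user wants.

```python
# Toy example (numbers made up): two different (preference, rationality)
# hypotheses assign the same likelihood to the same observed choices,
# illustrating why preference elicitation from behaviour is under-determined.
import math

def boltzmann_choice_prob(utility_gap: float, beta: float) -> float:
    """P(user picks option A over B) for a Boltzmann-rational user:
    utility_gap = u(A) - u(B); beta = rationality (inverse temperature)."""
    return 1.0 / (1.0 + math.exp(-beta * utility_gap))

# Hypothesis A: mild preference (gap 0.5), fairly reliable user (beta = 2.0).
p_a = boltzmann_choice_prob(utility_gap=0.5, beta=2.0)

# Hypothesis B: strong preference (gap 2.0), very noisy user (beta = 0.5).
p_b = boltzmann_choice_prob(utility_gap=2.0, beta=0.5)

print(f"Hypothesis A predicts {p_a:.3f}, hypothesis B predicts {p_b:.3f}")
# Both print 0.731: the observed choice frequency cannot distinguish the
# two hypotheses without extra assumptions about the user's rationality.
```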
Research directions aren’t priorities:
Advances in preference learning will be used for other unhelpful stuff (just read Andrew Critch’s post)
Research on how much influence delegated agents might offer is, besides being highly speculative, liable to be misused or to promote competitive dynamics
Context assumptions:
Delegated agents will first be developed inside, say, military labs (or other organisational structures elsewhere) that involve meaningfully dissimilar interactions from those at a Silicon Valley start-up.
The initial contexts in which delegated agents are produced and used don’t really matter for how AI designs are deployed in later decades (something like: outcomes are overdetermined).
Conceptual confusion:
Terms in this post are ambiguous or used to refer to different things (e.g. general AI ‘tasks’ vs. ‘tasks’ humans conceive and act on, ‘service’ infrastructure vs. online ‘service’ aimed at human users, ‘virtual assistant’ conventionally means a remote human assistant, ‘model’)
An ‘AI agent’ is a vague, leaky concept that should be replaced with more exacting dimensions and mechanisms
Carving up humans and algorithms into separate individuals with separate ‘preferences’ is a fundamentally impoverished framing. This post assumes that perspective and therefore fosters mistaken/unskillful reasoning.