Shard theory suggests that goals are more natural to specify/inculcate in their shard-forms (e.g. if around trash and a trash can, put the trash away), and not in their (presumably) final form of globally activated optimization of a coherent utility function which is the reflective equilibrium of inter-shard value-handshakes (e.g. a utility function over the agent’s internal plan-ontology such that, when optimized directly, leads to trash getting put away, among other utility-level reflections of initial shards).
I could (and did) hope that I could specify a utility function which is safe to maximize because it penalizes power-seeking. I may as well have hoped to jump off of a building and float to the ground. On my model, that’s just not how goals work in intelligent minds. If we’ve had anything at all beaten into our heads by our alignment thought experiments, it’s that goals are hard to specify in their final form of utility functions.
I think it’s time to think in a different specification language.
Agreed. I think power-seeking and other instrumental goals (e.g. survival, non-corrigibility) are just going to inevitably arise, and that if shard theory works for superintelligence, it will be by taking this into account and balancing these instrumental goals against deliberately installed shards which counteract them. I currently hold (loosely) a hypothesis that I would like to test (work in progress): that it's easier to 'align' a toy model of a power-seeking RL agent when the agent has lots and lots of competing desires whose weights are frequently changing than when it has a simpler and/or more statically weighted set of desires. Something maybe about the meta-learning of 'my desires change, so part of meta-level power-seeking should be not object-level power-seeking so hard that I sacrifice my ability to optimize for different object-level goals'. Unclear. I'm hoping that setting up an experimental framework and gathering data will show patterns that help clarify the issues involved.
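As a very rough illustration of the kind of toy experiment I have in mind (a minimal sketch, not a finished design; the environment, the "invest in power" action, and all parameter names below are hypothetical stand-ins I'm introducing for illustration), here is a tabular Q-learner that can either pursue one of several weighted desires or invest in a generic resource that boosts all future desire rewards, compared under fixed versus frequently reshuffled desire weights:

```python
# Illustrative sketch only: compare how much "power-seeking" (investing in a
# generic reward-boosting resource) a toy Q-learning agent does when its
# desire weights change frequently vs. stay fixed. All modelling choices
# here (POWER_BOOST, shuffle_every, etc.) are hypothetical assumptions.

import numpy as np

rng = np.random.default_rng(0)

N_DESIRES = 5        # number of competing object-level desires
MAX_POWER = 10       # cap on accumulated "power" (a generic resource)
POWER_BOOST = 0.2    # each unit of power scales desire rewards by (1 + POWER_BOOST * power)
EPISODE_LEN = 200
N_EPISODES = 300


def run_condition(shuffle_every, epsilon=0.1, alpha=0.1, gamma=0.95):
    """Train a tabular Q-learner; return the mean fraction of steps spent on
    the power-seeking action. State = current power level.
    Action 0 = invest in power; actions 1..N_DESIRES = pursue desire k."""
    q = np.zeros((MAX_POWER + 1, N_DESIRES + 1))
    power_fracs = []
    for _ in range(N_EPISODES):
        weights = rng.random(N_DESIRES)
        power, invest_count = 0, 0
        for t in range(EPISODE_LEN):
            if shuffle_every and t % shuffle_every == 0:
                weights = rng.random(N_DESIRES)   # desires drift over time
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = int(rng.integers(N_DESIRES + 1))
            else:
                a = int(np.argmax(q[power]))
            if a == 0:
                reward, next_power = 0.0, min(power + 1, MAX_POWER)
                invest_count += 1
            else:
                reward = weights[a - 1] * (1 + POWER_BOOST * power)
                next_power = power
            # standard Q-learning update
            q[power, a] += alpha * (reward + gamma * q[next_power].max() - q[power, a])
            power = next_power
        power_fracs.append(invest_count / EPISODE_LEN)
    return float(np.mean(power_fracs[-50:]))   # average over late episodes


if __name__ == "__main__":
    print("static desires:   power-seeking fraction ~", run_condition(shuffle_every=0))
    print("shifting desires: power-seeking fraction ~", run_condition(shuffle_every=10))
```

The interesting quantity is just the difference in the power-seeking fraction between the two conditions; whether frequently shifting weights dampens or amplifies investment in the generic resource is exactly the open question, and a real version would need a richer environment and agent than this.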