Stability under self-modification is a core problem of AGI generally, isn't it? So isn't that an effort to solve AGI rather than safety/friendliness (which would be fairly depressing given MIRI's stated goals)? Does MIRI have a way to define safety/friendliness that isn't derivative of moral philosophy?
Additionally, many human preferences are almost certainly not moral… surely a key part of the project would be to find some way to separate the two. Preference satisfaction seems like a potentially very unfriendly goal…
If you want to build an unfriendly AI, you probably don't need to solve the stability problem. If you have a consistently self-improving agent with unstable goals, it should eventually (a) reach an intelligence level where it could solve the stability problem if it wanted to, then (b) randomly arrive at goals that entail their own preservation, and then (c) implement the stability solution before those self-preserving goals can be overwritten. In effect, you can delegate the stability problem to the AI itself. The reason this doesn't generalize to friendly AI is that the process gives humans no obvious way to control which goals the agent happens to hold at step (b).
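To make the (a)–(c) argument a bit more concrete, here's a toy Monte Carlo sketch, not anything MIRI-specific: the goal space, the fraction of goals that entail their own preservation, and the capability threshold are all made-up parameters. Goals drift at random each self-modification step; once the agent is past the capability threshold and happens to hold a self-preserving goal, that goal gets locked in and drift stops.

```python
import random

random.seed(0)

GOALS = list(range(100))          # arbitrary toy goal space
SELF_PRESERVING = set(range(5))   # small fraction of goals entail their own preservation
CAPABILITY_STEP = 50              # step at which the agent could solve stability if it wanted to

def run_trial(max_steps=100_000):
    """Simulate one agent: random goal drift until a self-preserving goal locks in."""
    goal = random.choice(GOALS)
    for step in range(max_steps):
        if step >= CAPABILITY_STEP and goal in SELF_PRESERVING:
            return step, goal          # stability solution applied; goal never changes again
        goal = random.choice(GOALS)    # unstable goals: overwritten by the next self-modification
    return None, goal                  # never locked in within the horizon

results = [run_trial() for _ in range(1000)]
lock_steps = sorted(s for s, _ in results if s is not None)
locked_goals = sorted({g for s, g in results if s is not None})
print(f"locked in {len(lock_steps)}/1000 trials; "
      f"median lock-in step = {lock_steps[len(lock_steps) // 2]}")
print(f"goals the agents ended up preserving: {locked_goals}")
```

The sketch also illustrates the last point: the goal that ends up locked in is just whichever self-preserving goal the drift happened to land on at step (b), and nothing in the process lets an outside observer pick it.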
Cheers, thanks for the informative reply.