You have inspired me to try something different here. I’m going to agree with you.
It is interesting, though, that in a forum where some people are very concerned with what it takes to put an objective function into reflective equilibrium, you point out that objective functions are rarely in "experiential equilibrium": an agent can't rule out that it might like something until it has tried it.
This brings to mind some scary ideas regarding the harm that might be done by an immature AI experimenting with its powers before it eventually achieves its adult “equilibrium.”