I was thinking about what general, universal utility would look like. I managed to tie myself into an interesting mental knot.
I started with: Things occurring as intelligent agents would prefer.
If preferences conflict, weight preferences by the intelligence of the preferring agent.
Define intelligent agents as optimization processes.
Define relative intelligences as the relative optimization strengths of the processes.
Define a preference as something an agent optimizes for.
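One way to make that weighting concrete (a formalization I'm supplying here; the list above doesn't commit to any particular aggregation rule) is to give each agent $i$ a weight $w_i$ equal to its optimization strength and a utility $u_i$ over outcomes, and score an outcome $x$ by

$$U(x) = \sum_i w_i \, u_i(x),$$

so that whichever outcome the strength-weighted preferences favor most is the one the definition says should occur.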
Then, I realized that my definition was a descriptive prediction of events.
Suppose the universe is as we know it now, except that aliens definitely don’t exist, and the only living organism in the universe is a single human named Steve. Steve really wants to create a cheesecake the size of Pluto. No such cheesecake ever comes into existence, so by my definition the universe is apparently more intelligent than Steve, and does not want such a cheesecake to exist.
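To see the knot in miniature, here is a toy sketch in Python. It assumes the linear weighting above, and the agents, strengths, and outcome names are invented for illustration; “the universe” gets an enormous optimization strength precisely because, read descriptively, whatever actually happens is attributed to the strongest optimizer.

```python
# Toy illustration of reading outcomes back through the weighted-preference
# rule. All names, strengths, and preference numbers are made up.

def aggregate_preference(agents, outcomes):
    """Score each outcome as the sum over agents of
    (optimization strength) * (how strongly the agent prefers it)."""
    return {
        outcome: sum(a["strength"] * a["prefers"].get(outcome, 0.0) for a in agents)
        for outcome in outcomes
    }

agents = [
    {"name": "Steve", "strength": 1.0,
     "prefers": {"pluto_sized_cheesecake": 1.0}},
    # Treating "the universe" as an agent whose preference is whatever
    # actually happens forces its strength to dwarf Steve's.
    {"name": "universe", "strength": 1e9,
     "prefers": {"no_such_cheesecake": 1.0}},
]

print(aggregate_preference(agents, ["pluto_sized_cheesecake", "no_such_cheesecake"]))
# {'pluto_sized_cheesecake': 1.0, 'no_such_cheesecake': 1000000000.0}
# The "prediction" is vacuous: it just restates that the cheesecake didn't happen.
```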
Perhaps this “general, universal utility” is what would happen if the abilities of things we think of as intelligent were magnified.
True. As a prediction, it does not account for initial resources. Touché.