1) Perhaps you give it one domain and a utility function within that domain, and it returns a good action in that domain. Then you give it another domain and a different utility function, and it returns a good action in the new domain. Basically I’m saying that it doesn’t maximize a single unified utility function.
2) You prove too much. This implies that the Unix cat program has a utility function (or else it is wasting effort). Technically you could view it as having a utility function of “1 if I output what the source code of cat outputs, 0 otherwise”, but this really isn’t a useful level of analysis. Also, if you’re going to go the route of assigning a silly utility function to this program, then this is a utility function over something like “memory states in an abstract virtual machine”, not “states of the universe”, so it will not necessarily (say) try to break out of its box to get more computation power.
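To make the reductio in point 2 concrete, here is a toy sketch (purely illustrative; the names are made up) of the kind of degenerate utility function you would have to ascribe to cat:

```python
def cat(stdin: bytes) -> bytes:
    """Toy stand-in for the Unix cat program: echo input to output."""
    return stdin

def cat_utility(stdin: bytes, output: bytes) -> float:
    """The degenerate 'utility function' for cat: 1 if the output is
    exactly what cat would produce, 0 otherwise. Any deterministic
    program can be assigned a function of this form, and it is defined
    over program outputs, not over states of the universe."""
    return 1.0 if output == cat(stdin) else 0.0
```

A function like this predicts nothing beyond what the program already does, which is why it isn’t a useful level of analysis.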
On 2, we’re talking about things in the space of agents. Unix utilities are not agents.
But if you really want to go that route? You didn’t prove it wrong, just silly. The more agent-like the thing we’re talking about, the less silly it is.
I don’t think the connotations of “silly” are quite right here. You could still use this program to do quite a lot of useful inference and optimization across a variety of domains, without killing everyone. Sort of like how frequentist statistics can be very accurate in some cases despite being suboptimal by Bayesian standards. Bostrom mostly talks only about agent-like AIs, and while I think that is mostly the right approach, he should have been more explicit about it. As I said before, we don’t currently know how to build agent-like AGIs because we haven’t solved the ontology mapping problem, but we do know how to build non-agentlike cross-domain optimizers given enough computation power.
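To sketch what I mean by that last sentence (illustrative only, assuming effectively unlimited compute; the names are hypothetical), a non-agentlike cross-domain optimizer can be as crude as exhaustive search over whatever domain model and utility function you hand it on each call:

```python
from typing import Callable, Iterable, TypeVar

Action = TypeVar("Action")
State = TypeVar("State")

def optimize(actions: Iterable[Action],
             model: Callable[[Action], State],
             utility: Callable[[State], float]) -> Action:
    """Exhaustive search: simulate each candidate action in the supplied
    domain model, score the result with the utility function supplied
    for this call only, and return the best-scoring candidate."""
    return max(actions, key=lambda action: utility(model(action)))
```

Each call can use a different domain and a different utility function (point 1 above), the utility function ranges over model states rather than states of the universe (point 2), and given enough computation this works across domains without the system being an agent pursuing one unified goal.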
I don’t see how being able to use a non-agent program to do useful things means it’s not silly to say it has a utility function. It’s not an agent.
Okay. We seem to be disputing definitions here. By your definition, it is totally possible to build a very good cross-domain optimizer without it being an agent (so it doesn’t optimize a utility function over the universe). It seems like we mostly agree on matters of fact.