Seems to me an AI without goals wouldn’t do anything, so I don’t see it as being particularly dangerous. It would take no actions and have no reactions, which would render it perfectly safe. However, it would also render the AI perfectly useless—and it might even be nonsensical to consider such an entity “intelligent”. Even if it possessed some kind of untapped intelligence, without goals that would manifest as behavior, we’d never have any way to even know it was intelligent.
The question about utility maximization is harder to answer. But I think all agents that accomplish goals can be described as utility maximizers regardless of their internal workings; if so, that (together with what I said in the last paragraph) implies that an AI that doesn’t maximize utility would be useless and (for all intents and purposes) unintelligent. It would simply do nothing.
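To make that descriptive claim concrete, here is a minimal sketch (my own illustration; the function names and the 0/1 utility are assumptions, not anything from Bostrom): an agent that reliably accomplishes a goal can be redescribed as maximizing a utility function that scores goal-satisfying outcomes above everything else, whatever its internals look like.

```python
# Sketch only: redescribing a goal-accomplishing agent as a utility maximizer.

def utility(outcome, goal_satisfied) -> float:
    # Degenerate utility: 1 if the goal is met, 0 otherwise.
    return 1.0 if goal_satisfied(outcome) else 0.0

def choose_action(actions, predict_outcome, goal_satisfied):
    # Pick the action whose predicted outcome scores highest under `utility`.
    # Any agent whose behavior fits this pattern counts as a utility maximizer
    # by construction, regardless of how it is implemented internally.
    return max(actions, key=lambda a: utility(predict_outcome(a), goal_satisfied))

# Toy usage: the "goal" is to answer "4" when asked 2 + 2.
actions = ["3", "4", "5"]
print(choose_action(actions,
                    predict_outcome=lambda a: a,
                    goal_satisfied=lambda o: o == "4"))  # -> "4"
```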
I am afraid we have not precisely defined the term “goal”, and I think we need to.
I am trying to analyse this term.
Do you think today’s computers have goals? I don’t think so (but we probably understand this term differently). Are they useless? Do cars have goals? Are they without action and reaction?
Perhaps I can describe my idea more precisely in another way:
In Bostrom’s book there are goals and subgoals. Goals are ultimate, petrified and strengthened; subgoals are particular, flexible and temporary.
Could we conceive of an AI without goals but with subgoals?
One possibility could be for it to have its “goal centre” externalized in a human brain.
Could we think of an AI as a tabula rasa, a pure void at the beginning, right after creation? Or can an AI not exist without hardwired goals?
If it could start as a void, would a goal be imprinted by its first task?
Or by its first task containing the word “please”? :)
About the utility maximizer: a human (or animal) brain is not useless just because it does not grow without limit. There is a tradeoff between gain and energy consumption.
We have to (or at least could) think in terms of balanced processes. A one-dimensional, one-directional, unbalanced utility function seems to have doom as its default outcome. But is it the only choice?
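Here is a minimal sketch of one alternative (entirely my own illustration; the penalty term, weights and numbers are assumptions): a “balanced” objective that trades task gain against energy spent, instead of maximizing a single unbounded quantity.

```python
# Sketch of a balanced objective: benefit of the result minus the price of
# obtaining it, rather than raw gain maximized without limit.

def balanced_utility(gain: float, energy_used: float, energy_cost: float = 1.0) -> float:
    return gain - energy_cost * energy_used

# With diminishing returns on gain, spending ever more energy stops paying off.
candidates = [
    {"plan": "quick answer",        "gain": 1.00, "energy": 0.1},
    {"plan": "triple-check",        "gain": 1.01, "energy": 5.0},
    {"plan": "convert the planet",  "gain": 1.02, "energy": 1e9},
]
best = max(candidates, key=lambda c: balanced_utility(c["gain"], c["energy"]))
print(best["plan"])  # -> "quick answer"
```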
How did nature do it? (I am not talking about evolution but about DNA encoding.)
A balance between “intelligent” neural tissue (the SAI) and “stupid” non-neural tissue (humanity). :)
Probably we have to see the difference between a purpose and a B-goal (a goal in Bostrom’s sense).
If a machine has to solve an arithmetic equation, it has to solve it, not destroy seven planets to solve it as perfectly as possible.
I have the feeling that when you say “do it”, Bostrom’s AI hears “do it maximally perfectly”.
If you say “tell me how much 2 + 2 is (and do not destroy anything)”, she will still destroy a planet to be sure that nobody could stop her from answering how much 2 + 2 is.
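To make the purpose-versus-B-goal contrast concrete, a toy sketch (my own, hypothetical names and numbers throughout): a satisficer treats “tell me 2 + 2” as a bounded task and stops, while a maximizer keeps spending resources to push its certainty of answering toward 1.

```python
# Toy contrast, illustrative assumptions throughout.

def satisficer():
    # Treats "tell me 2 + 2" as a purpose: do the task, then stop.
    return 2 + 2

def maximizer(confidence_target: float = 1.0):
    # Treats the same request as a B-goal: keep acquiring resources until
    # nothing could possibly prevent the answer. Certainty is never reached,
    # so without the cap below it would never stop.
    confidence, resources_used = 0.9, 0
    while confidence < confidence_target:
        resources_used += 1                  # each step consumes more of the world
        confidence += (1 - confidence) / 2   # and only halves the remaining doubt
        if resources_used > 40:
            break
    return 2 + 2, resources_used

print(satisficer())  # -> 4, using essentially nothing
print(maximizer())   # -> (4, 41), still not certain when we cut it off
```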
I have the feeling that Bostrom implicitly assumes a void AI at the beginning, and that in the next step there is an AI with an ultimate, unchangeable goal. I am not sure this is plausible. And I think we need a good definition or understanding of “goal” to know whether it is plausible.