I am afraid that we have not precisely defined the term "goal", and I think we need to.
I am trying to analyse this term.
Do you think that today's computers have goals? I don't think so (but probably we have different understandings of this term). Are they useless? Do cars have goals? Are they without action and reaction?
Perhaps I can describe my idea more precisely in another way:
In Bostrom's book there are goals and subgoals. Goals are ultimate, petrified and strengthened; subgoals are particular, flexible and temporary.
Could we conceive of an AI without goals but with subgoals?
One possibility would be for it to have its "goal centre" externalized in a human brain.
Could we think of an AI as a tabula rasa, a pure void at the beginning, right after creation? Or could an AI not exist without hardwired goals?
If it could start as a void, would a goal be imprinted by its first task?
Or by the first task containing the word "please"? :)
About utility maximizers: a human (or animal) brain is not useless just because it does not grow without limit. There is a tradeoff between gain and energy consumption.
We have to (or at least could) think in terms of balanced processes. A one-dimensional, one-directional, unbalanced utility function seems to have doom as its default outcome. But is that the only choice?
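To make the point concrete, here is a toy sketch (my own example, not anything from the book): an "unbalanced" objective rewards acquiring more resources forever, while a "balanced" one subtracts the energy cost, so growth stops paying beyond some point.

```python
# Toy illustration (my assumption, not Bostrom's formalism):
# an unbalanced objective keeps rewarding more resources without limit,
# a balanced one subtracts the energy cost of holding those resources.

def unbalanced_utility(resources: float) -> float:
    # Monotonically increasing: more resources are always better.
    return resources

def balanced_utility(resources: float, energy_price: float = 0.5) -> float:
    # Diminishing returns on gain minus a linear energy cost,
    # so there is a finite optimum instead of "grab everything".
    gain = resources ** 0.5
    cost = energy_price * resources
    return gain - cost

if __name__ == "__main__":
    candidates = [1, 4, 16, 64, 256]
    print("unbalanced agent wants:", max(candidates, key=unbalanced_utility))  # always the largest option
    print("balanced agent wants:", max(candidates, key=balanced_utility))      # a finite optimum (1 here)
```

The only point of the toy is that the shape of the objective, not the amount of intelligence, decides whether "more" is always better.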
How did nature do it? (I am not talking about evolution here, but about DNA encoding.)
A balance between "intelligent" neural tissue (the SAI) and "stupid" non-neural tissue (humanity). :)
Probably we have to distinguish between a purpose and a B-goal (a goal in Bostrom's sense).
If a machine has to solve an arithmetic equation, it has to solve it, not destroy seven planets to do it as perfectly as possible.
I have the feeling that when you say "do it", Bostrom's AI hears "do it maximally perfectly".
If you say "tell me how much 2+2 is (and do not destroy anything)", it will still destroy the planet to be sure that nobody can stop it from answering how much 2+2 is.
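Again a toy sketch of the difference I mean (my own illustration, with made-up names): a satisficer stops as soon as the task is done, while a certainty-maximizer never finds its confidence high enough and keeps consuming resources.

```python
# Toy contrast (my own illustration, not a quote of Bostrom's model):
# a satisficer terminates once the answer is produced, a maximizer keeps
# spending resources to push its certainty toward 1.0, which it never reaches.

def satisficing_agent() -> str:
    answer = 2 + 2              # solve the task
    return f"2+2 = {answer}"    # ...and stop; no further actions taken

def maximizing_agent(confidence_target: float = 1.0) -> str:
    confidence = 0.99
    resources_consumed = 0
    # Certainty only approaches 1.0, so the loop ends only when the agent
    # runs out of things to consume (here: an arbitrary cap of 10 "planets").
    while confidence < confidence_target and resources_consumed < 10:
        resources_consumed += 1                 # e.g. "secure one more planet"
        confidence += (1 - confidence) / 2
    return f"2+2 = 4 (confidence {confidence:.6f}, resources used: {resources_consumed})"

print(satisficing_agent())
print(maximizing_agent())
```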
I have the feeling that Bostrom implicitly imagines a void AI at the beginning, and in the next step an AI with an ultimate, unchangeable goal. I am not sure this is plausible. And I think we need a good definition, or at least a good understanding, of "goal" to know whether it is plausible.