The concept of an agent is logically inconsistent with the General Intelligence Algorithm. What you are trying to refer to with Agent/tool, etc., are just GIA instances with slightly different parameters, inputs, and outputs.
Even if it could be logically extended to the point of “Not even wrong,” it would just be a convoluted way of looking at it.
I’m sorry, I wasn’t trying to use terminology to misstate your position.
What are three values that a GIA must have, and why must it have them?
Ohhhh… sorry… There is really only one, and everything else is derived from it: Familiarity. Any other values would depend on the input, output, and parameters. However, familiarity is inconsistent with the act of killing familiar things. The concern comes in when something else causes the instance to lose access to something it is familiar with, and the instance decides it can simply force that not to happen.
Well, I’m not sure that Familiarity is sufficient to resolve every choice faced by a GIA—for example, how does one derive a reasonable definition of self-defense from Familiarity? But let’s leave that aside for a moment.
Why must a GIA subscribe to the value of Familiarity?
Because it is the proxy for survival. You cannot avoid something that, by definition, you can have no memory of (nor could your ancestors have).
Self-defense, of course, requires first a fear of loss (aversion to loss is integral; fear and the will to stop it are not), then awareness of self, and then awareness that certain actions could cause loss of self.
I’m not at all sure I understand what you mean. I don’t see the connection between familiarity and survival. Moreover, not all general intelligences will be interested in survival.
Familiar things didn’t kill you. And no, they are not interested in survival; they are interested in familiarity. I just said that. It is rare but possible for a need for familiarity (as defined mathematically rather than linguistically) to result in the sacrifice of a GIA instance’s self...
“I Have No Mouth, and I Must Scream”.
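The exchange above never spells out the math behind “familiarity as defined mathematically rather than linguistically,” so the following is only one possible toy reading, not anything the participants specified: score an observation by its closeness to remembered observations, and have an instance prefer actions whose predicted outcomes score highest. Every name, function, and formula in the sketch below is a hypothetical illustration of that assumption.

```python
import numpy as np

# Hypothetical toy formalization of "familiarity" (an assumption, not the
# GIA's actual definition): the familiarity of an observation is its mean
# negative Euclidean distance to observations already held in memory.

def familiarity(observation: np.ndarray, memory: list[np.ndarray]) -> float:
    """Higher values mean the observation is closer to past experience."""
    if not memory:
        return float("-inf")  # nothing is familiar to an empty memory
    distances = [np.linalg.norm(observation - m) for m in memory]
    return -float(np.mean(distances))

def choose_action(candidate_outcomes: dict[str, np.ndarray],
                  memory: list[np.ndarray]) -> str:
    """Pick the action whose predicted outcome is most familiar."""
    return max(candidate_outcomes,
               key=lambda a: familiarity(candidate_outcomes[a], memory))

# Usage: an instance that has only seen states near the origin prefers
# the action predicted to keep it there.
memory = [np.array([0.0, 0.0]), np.array([0.1, -0.1])]
outcomes = {"stay": np.array([0.05, 0.0]), "explore": np.array([5.0, 5.0])}
print(choose_action(outcomes, memory))  # -> "stay"
```

On this toy reading, the rare self-sacrifice case mentioned above would correspond to a situation where the most familiar predicted outcome happens to be one the instance itself does not survive.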