Mathematically, any value that an AI can calculate about anything external is a function of its sensory input.
Sure, but the kind of function matters for our purposes. That is, there’s a difference between an optimizing system designed to optimize for sensory input of a particular type, and a system designed to optimize for something it currently treats that sensory input as evidence of. That’s a difference I care about if I want the system to maximize the “something” rather than just rewire its own perceptions.
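To make that concrete, here’s a toy sketch (illustrative Python; the setup and all the names are my own invention, not a claim about any real system) of the two kinds of objective I have in mind:

```
import random

# Toy world (my invention): some true amount of X exists "out there",
# and the agent reads it through a sensor it could in principle tamper with.
class World:
    def __init__(self):
        self.true_x = 10.0        # actual prevalence of X
        self.sensor_gain = 1.0    # part of the agent's own perceptual machinery

    def sensor_reading(self):
        return self.sensor_gain * self.true_x + random.gauss(0, 0.1)

# Objective 1: score the sensory input itself.
# Turning up sensor_gain raises this just as effectively as making more X.
def reward_sensory(world):
    return world.sensor_reading()

# Objective 2: score what the input is treated as evidence of.
# The reading is only used to estimate true_x, so tampering with
# sensor_gain does not change the expected score; only changing
# the actual amount of X does.
def reward_inferred(world):
    return world.sensor_reading() / world.sensor_gain
```

The first objective is satisfied just as well by rewiring perception; the second is only satisfied by changing the thing the perception is about, and that’s the difference I’m pointing at.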
Be specific about what the input domain of the ‘function’ in question is.
And yes, there is a difference: the former is well defined and is what AI research works towards; the latter is part of an extensive AI-fear rationalization framework, where it is conflated with the notion of generality of intelligence, so as to presume that practical AIs will maximize the “somethings”, followed by the notion that pretty much all “somethings” would be dangerous to maximize. Utility is a purely descriptive notion; an AI that decides on actions is a normative system.
edit: To clarify, intelligence is defined here as a ‘cross-domain optimizer’ that would therefore be able to maximize something vague without that something having to be coherently defined. It is similar to the knights of the round table worrying that an AI would literally search for the Holy Grail, because to those knights the abstract and ill-defined goal of the Holy Grail appears entirely natural; meanwhile, for systems more intelligent than those knights, such a confused goal is, due to its incoherence, impossible to define.
(shrug)
It seems to me that even if I ignore everything SI has to say about AI and existential risk and so on, and ignore all the fear-mongering, etc., the idea of a system that attempts to change its environment so as to maximize the prevalence of some X remains a useful idea.
And if I extend the aspects of its environment that the system can manipulate to include its own hardware or software, or even just its own tuning parameters, it seems to me that there exists a perfectly crisp, measurable distinction between a system A that continues to increase the prevalence of X in its environment, and a system B that instead manipulates its own subsystems for measuring X.
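Concretely, the distinction I mean could be checked like this (a toy sketch in Python, with an invented setup; obviously not a real agent):

```
# Toy illustration (my own invented setup): two "agents" both end up
# reporting a high measured value of X, and are distinguished by checking
# the environment directly rather than trusting their own sensors.

class Environment:
    def __init__(self):
        self.x_count = 5   # actual amount of X in the environment

class Agent:
    def __init__(self, env):
        self.env = env
        self.sensor_bias = 0   # part of the agent's own measuring subsystem

    def measured_x(self):
        return self.env.x_count + self.sensor_bias

# System A: acts on the environment.
def run_system_a(agent, steps=10):
    for _ in range(steps):
        agent.env.x_count += 1       # actually makes more X

# System B: acts on its own measuring subsystem.
def run_system_b(agent, steps=10):
    for _ in range(steps):
        agent.sensor_bias += 1       # merely inflates its own readings

env_a, env_b = Environment(), Environment()
a, b = Agent(env_a), Agent(env_b)
run_system_a(a)
run_system_b(b)

print(a.measured_x(), b.measured_x())   # both report 15
print(env_a.x_count, env_b.x_count)     # 15 vs. 5 -- the measurable distinction
```

Both agents end up reporting the same measured value, but inspecting the environment directly separates A from B; that’s what I mean by a crisp, measurable distinction.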
If any part of that is as incoherent as you suggest, and you’re capable of pointing out the incoherence in a clear fashion, I would appreciate that.
> the idea of a system that attempts to change its environment so as to maximize the prevalence of some X remains a useful idea.
The prevalence of X is defined how?
> And if I extend the aspects of its environment that the system can manipulate to include its own hardware or software, or even just its own tuning parameters, it seems to me that there exists a perfectly crisp, measurable distinction between a system A that continues to increase the prevalence of X in its environment, and a system B that instead manipulates its own subsystems for measuring X.
In A, you confuse your model of the world with the world itself; in your model of the world you have a possible item ‘paperclip’, and you can therefore easily imagine maximization of the number of paperclips inside your model of the world, complete with the AI necessarily trying to improve its understanding of the ‘world’ (your model). With B, you construct a falsely singular alternative of a rather broken AI, and see a crisp distinction between two irrelevant ideas.
The practical issue is that the ‘prevalence of some X’ cannot be specified without a model of the world; you cannot have a function without specifying its input domain, and ‘reality’ is never the input domain of a mathematical function; the notion is not only incoherent but outright nonsensical.
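To put the same point as code (an illustrative sketch; the types here are made up on the spot): the moment you try to write ‘prevalence of X’ as an actual function, you are forced to pick a concrete input domain, and that domain is always some formal model, never ‘reality’ itself.

```
from dataclasses import dataclass

# You cannot declare a function over "reality"; you must first commit to
# some formal state space. Here that state space is a made-up model in
# which the world is a list of labelled objects -- already a modelling choice.
@dataclass
class ModelState:
    objects: list   # e.g. ["paperclip", "desk", "paperclip"]

def prevalence_of_paperclips(state: ModelState) -> float:
    # Well defined -- but only over ModelState, i.e. over the model,
    # not over the world itself.
    return state.objects.count("paperclip") / max(len(state.objects), 1)

# There is no analogous way to write
#     def prevalence_of_paperclips(reality) -> float
# because "reality" is not a specified input domain; whatever you pass in
# will again be some encoding chosen by whoever built the model.
```

Whatever you pass to such a function is already an encoding somebody chose; the function is defined over that encoding, not over the world.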
> If any part of that is as incoherent as you suggest, and you’re capable of pointing out the incoherence in a clear fashion, I would appreciate that.
The incoherence of such poorly defined concepts cannot be demonstrated when no attempt has been made to make the notions specific enough to even rationally assert their coherence in the first place.
OK. Thanks for your time.