The idea of a system that attempts to change its environment so as to maximize the prevalence of some X remains useful.
The prevalence of X is defined how?
And if I extend the aspects of its environment that the system can manipulate to include its own hardware or software, or even just its own tuning parameters, it seems to me that there exists a perfectly crisp, measurable distinction between a system A that continues to increase the prevalence of X in its environment, and a system B that instead manipulates its own subsystems for measuring X.
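To make that distinction concrete, here is a minimal toy sketch in Python (the class and function names are mine, purely illustrative): system A acts on the environment, system B acts only on its own measurement subsystem, and the two can be told apart by inspecting the environment rather than the reading.

```python
class Environment:
    def __init__(self):
        self.true_count_of_x = 0   # actual prevalence of X in the world

class Sensor:
    def __init__(self, env):
        self.env = env
        self.offset = 0            # internal tuning parameter the system could tamper with

    def reading(self):
        return self.env.true_count_of_x + self.offset

def run_system_a(env, sensor, steps):
    # System A: changes the environment itself.
    for _ in range(steps):
        env.true_count_of_x += 1
    return sensor.reading()

def run_system_b(env, sensor, steps):
    # System B: leaves the world alone and rewrites its own measurement.
    sensor.offset += steps
    return sensor.reading()

env_a, env_b = Environment(), Environment()
print(run_system_a(env_a, Sensor(env_a), 10), env_a.true_count_of_x)  # 10 10
print(run_system_b(env_b, Sensor(env_b), 10), env_b.true_count_of_x)  # 10 0
```

The sensor readings coincide, but the true counts differ, which is the crisp, measurable distinction I have in mind.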
In A, you confuse your model of the world with the world itself: within your model there is a possible item, ‘paperclip’, so you can easily imagine maximizing the number of paperclips inside that model, complete with the AI necessarily trying to improve its understanding of the ‘world’ (your model). With B, you construct a falsely singular alternative, a rather broken AI, and so you see a crisp distinction between two irrelevant ideas.
The practical issue is that the ‘prevalence of some X’ cannot be specified without a model of the world; you cannot have a function without specifying its input domain, and ‘reality’ is never the input domain of a mathematical function; the notion is not merely incoherent but outright nonsensical.
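A minimal sketch of what I mean (the names are purely illustrative): any ‘prevalence of X’ you can actually write down is a function over some explicit representation of a world, i.e. over a model, never over ‘reality’ itself.

```python
from typing import List

# Hypothetical world-model representation: the counting function is only
# well-defined once a domain like this is fixed. It takes the model as
# input, never 'reality' directly.
WorldModel = List[str]   # e.g. labels of objects the system believes exist

def prevalence_of_x(model: WorldModel, x: str = "paperclip") -> int:
    return sum(1 for obj in model if obj == x)

believed_world: WorldModel = ["paperclip", "desk", "paperclip"]
print(prevalence_of_x(believed_world))   # 2: a count over the model, not the world
```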
If any part of that is as incoherent as you suggest, and you’re capable of pointing out the incoherence in a clear fashion, I would appreciate that.
The incoherence of such poorly defined concepts cannot be demonstrated when no attempt has been made to make the notions specific enough to even rationally assert their coherence in the first place.
OK. Thanks for your time.