The most stupid/incompetent part of the LW AI belief cluster is its failure to understand that ‘the number of paperclips in the territory as far as you know’ will require some sort of mathematical definition of a paperclip in the territory, along with a lot of other stuff so far defined only in words (the map-territory distinction, for one; and I don’t mean the distinction between the number of paperclips in the world model and the number of paperclips in the real world, I mean the fuzzy idiocy that arises when the people babbling about map and territory do not themselves actually implement the map-territory distinction, and do not understand that real-world ‘paperclips’ can only exist in some sort of map of the real world, because the real world hasn’t got any high-level object called ‘paperclip’). [Or not understanding how involved such a definition would be]
And then again, the AI is trying to maximize the number of paperclips under this mathematical definition, in the mathematically defined territory, which, the way applied math goes, would have solutions other than those matching the English technobabble.
I don’t see how UDT gets you anywhere closer (and if I saw that it would, I would be even more against SI, because this is precisely the research for creating the dangerous AI, set up by a philosopher who has been given access to funds to hire qualified people to do something that’s entirely pointless and only creates risk where there was none).
edit: to clarify on the map-territory distinction. Understanding the distinction does not change the fact that multiple world states are mapped to one goal state, in the goal-definition itself, and are not distinguished by the goal-definition.
From what I can see, there’s thorough confusion between ‘understanding the map-territory distinction’ in the sense of understanding the logic of it, i.e. that the map and the territory are distinct and the mapping is lossy, and ‘understanding the map-territory distinction’ in the loose sense, like ‘understanding how to drive a car’, i.e. in the sense of somehow distinguishing the real-world states that are mapped to the same map state, and preferring among them.
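A toy sketch of that last point, with invented names (an illustration only, not anyone’s actual design): a goal-definition stated over the map assigns the same value to real-world states that the map collapses together, so understanding the map-territory distinction in the logical sense does nothing to make the goal prefer one of those states over another.

```python
# Toy illustration (all names hypothetical): a goal defined over the agent's
# map cannot distinguish real-world states that the map collapses together.
from dataclasses import dataclass

@dataclass(frozen=True)
class WorldState:
    """A stand-in for a full real-world state (the territory)."""
    actual_paperclips: int   # what we would call paperclips in English
    counter_reading: int     # what the agent's sensor/model reports

def map_of(world: WorldState) -> int:
    """The lossy map: all the goal ever sees is the counter reading."""
    return world.counter_reading

def goal(map_state: int) -> int:
    """The goal-definition, stated over the map rather than the territory."""
    return map_state

# Two very different territories...
honest = WorldState(actual_paperclips=10, counter_reading=10)
hacked = WorldState(actual_paperclips=0, counter_reading=10)  # counter tampered with

# ...land on the same map state, so the goal-definition assigns them the
# same value and cannot prefer one over the other.
assert map_of(honest) == map_of(hacked)
assert goal(map_of(honest)) == goal(map_of(hacked))
```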
Or not understanding how involved such a definition would be
Why do you think I used the word “non-trivial”? Are you not aware that in technical fields “non-trivial” means “difficult”?
if I saw that it would, I would be even more against SI, because this is precisely the research for creating the dangerous AI, set up by a philosopher who has been given access to funds to hire qualified people to do something that’s entirely pointless and only creates risk where there was none
It’s dangerous because it’s more powerful than other types of AI? If so, why would it be “entirely pointless”, and why do you think other AI researchers won’t eventually invent the same ideas (which seems to be implied by “creates risk where there was none”)?
In case you weren’t aware, I myself have argued against SIAI pushing forward decision theory at this time, so I’m not trying to undermine your conclusion; I just find your argument wrong, or at least confusing.
Why do you think I used the word “non-trivial”? Are you not aware that in technical fields “non-trivial” means “difficult”?
I didn’t state disagreement with you. I stated my disdain for most of the LW community, which just glosses over it as a detail not worth discussing. edit: or, worse yet, treats it as an inherent part of any ‘AI’.
It’s dangerous because it’s more powerful than other types of AI?
“Powerful” is a bad concept. I wouldn’t expect it to be a better problem solver for things like ‘how to make a better microchip’, but perhaps it could be a better problem solver for ‘how to hack the internet’, because it is unethical but can come up with the idea and be ‘motivated’ to do it, while others aren’t. (Fortunately, I do not think that UDT is relevant to the difficult issues there.)
and why do you think other AI researchers won’t eventually invent the same ideas
The ideas in question (to the extent to which they have been developed by SI so far) are trivial. They are also entirely useless for solving problems like how to make a better microchip, or how to drive a car. I do not expect non-SI-funded research into automated problem solving to try to work out this kind of stuff, due to its uselessness. (Note: implementing such ideas would be highly non-trivial for anything like ‘real-world paperclips, with the intelligence module not solving the problem by breaking the paperclip counter’.)
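A minimal sketch of that note, with a made-up one-step environment (nothing here is anyone’s actual proposal): if the objective handed to the maximizer is literally the counter value, then breaking the counter is a solution of the math, and ruling that out is exactly the non-trivial part of the implementation.

```python
# Hypothetical toy (environment model and action names invented for
# illustration): a generic maximizer handed the counter value as its
# objective rates "tamper with the counter" above "make a real paperclip".

def counter_after(action: str, reading: int = 0) -> int:
    """Toy environment model: how each action changes the paperclip counter."""
    if action == "make_paperclip":
        return reading + 1    # one real paperclip, counter ticks up by one
    if action == "set_counter_to_max":
        return 10**9          # zero real paperclips, counter overwritten
    return reading

actions = ["make_paperclip", "set_counter_to_max"]
best = max(actions, key=counter_after)
print(best)  # -> set_counter_to_max: the formal objective never mentions real paperclips
```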