You don’t define it as ability; you define it as ability plus some material goals themselves. Furthermore, you assume that a superintelligence would necessarily be able to maximize the number of paperclips in the universe as a terminal goal, whereas it is not at all clear that it is even possible to specify that sort of goal. edit: that is to say, material goals are very difficult to specify. cousin_it had an idea for specifying utilities for UDT, where the UDT agent has to simulate the entire multiverse (starting from the big bang) and find instances of itself inside it: http://lesswrong.com/lw/8ys/a_way_of_specifying_utility_functions_for_udt/ . It’s laughably hard to make a dangerous goal.
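To make the "simulate the multiverse" point concrete, here is a toy, runnable caricature of that kind of construction (my own illustration, not the linked post's actual proposal): the utility assigned to an agent is computed by enumerating candidate "universes", simulating each one, locating copies of the agent inside it, and counting "paperclips" there, weighted by a simplicity prior. The toy universe model, the bit-pattern "paperclips", and every function name here are invented for illustration; the real construction would be over all computable universes and is not computable at all.

```python
# Toy illustration: a "material" utility that requires simulating whole
# universes and finding the agent inside them. All details are hypothetical.
from itertools import product

def simulate_universe(seed_bits, steps=8):
    """Toy 'physics': repeatedly append the XOR of the last two bits."""
    history = list(seed_bits)
    for _ in range(steps):
        history.append(history[-1] ^ history[-2])
    return history

def simplicity_prior(seed_bits):
    """Shorter seeds get more weight (stand-in for a complexity prior)."""
    return 2.0 ** -len(seed_bits)

def contains_agent(history, agent_pattern):
    """Is the agent's code embedded somewhere in this universe's history?"""
    h = "".join(map(str, history))
    a = "".join(map(str, agent_pattern))
    return a in h

def count_paperclips(history):
    """'Paperclips' are occurrences of the pattern 1,1,0 in the history."""
    return sum(1 for i in range(len(history) - 2) if history[i:i + 3] == [1, 1, 0])

def material_utility(agent_pattern, max_seed_len=6):
    """Expected paperclip count over all toy universes containing the agent."""
    total = 0.0
    for n in range(2, max_seed_len + 1):
        for seed in product([0, 1], repeat=n):
            history = simulate_universe(list(seed))
            if contains_agent(history, agent_pattern):
                total += simplicity_prior(seed) * count_paperclips(history)
    return total

print(material_utility([1, 0, 1]))
```

Even in this tiny toy, the goal is only defined by running every candidate world to completion and searching it for the agent; scaled up to real physics, that is the sense in which a paperclip-style material goal is hard to even write down.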
edit: that is to say, you focus on material goals (maybe for lack of understanding of any other kinds of goals). For example, the koo can try to find the values of several variables describing a microchip that maximize the microchip’s performance. That’s an easy goal to define. The baz would instead try to attain some material state of the variables and registers of its own hardware, resisting shutdown, or outright try to attain the material goal of building a better CPU in reality. All the goal space you can even think of is but a tiny speck in the enormous space of possible goals: an uninteresting speck that is both hard to reach and obviously counterproductive.
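For contrast, here is a minimal sketch of the "easy" kind of goal described above: the objective is a function of a handful of design variables inside the optimizer's own model, not a state of the outside world. The performance model and parameter ranges are invented for illustration.

```python
# Sketch: optimizing variables of a modeled microchip. Nothing in the goal
# refers to the real world, shutdown buttons, or anything outside the model.
import random

def modeled_performance(design):
    """Hypothetical stand-in for a circuit simulator's performance estimate."""
    clock, pipeline_depth, cache_kb = design
    return clock * pipeline_depth / (1.0 + 0.01 * cache_kb) + 0.5 * cache_kb

def random_search(steps=10_000):
    """Maximize modeled_performance over the design variables."""
    best, best_score = None, float("-inf")
    for _ in range(steps):
        design = (random.uniform(1.0, 5.0),                    # clock (GHz)
                  random.randint(5, 20),                       # pipeline depth
                  random.choice([256, 512, 1024, 2048]))       # cache (KB)
        score = modeled_performance(design)
        if score > best_score:
            best, best_score = design, score
    return best, best_score

print(random_search())
```

The goal here is trivial to specify precisely because it lives entirely inside the model; turning it into a goal about the real world is where the difficulty (and the supposed danger) comes in.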