This isn't responsive to Kaj's question. In this scenario, the AGI systems don't need humans (you're not describing a loss in the event of humans going extinct); they preserve them as a side effect of other processes.
In this scenario, the AGI systems don't need humans (you're not describing a loss in the event of humans going extinct)
Humans are a type of node of the AGI. The AGI needs its own nodes (and the protocols and other things that make the AGI be itself). It's not the typical AGI desire, I know; it is slightly complicated, and there is more than one step in the thought here.
My view is much more an engineering perspective than a philosophical one. Some degree of 'friendliness' (not in the sense of a psychopathic benefactor who will lie to, mislead, and manipulate you to make your life better, but in the sense of trust) is necessary for intellectual cooperation among the AI's nodes. I think your problem is that you define intelligence very narrowly, as something that works to directly fulfil material needs, goals, etc., and the smarter it is, the more closely it must be modelled by that ideal (you have somewhere lost the concept of winning the most and replaced it with something else). That is very impoverished thinking, when combined with an ontologically basic 'intelligence' that doesn't really consist of components (with 'don't build some of the most difficult components' as a solution to the problem).
Let baz = SI's meaning of AI. Let koo = industry's meaning of AI. Bazs are incredibly dangerous and must not be created; I'll take SI's word for it. A baz requires a lot of extra work compared to a useful koo (philosophy-of-mind work that we can't get a koo to do), work that is clearly at best pointless, at worst dangerous, and definitely difficult; one doesn't need to envision foom and the destruction of mankind to avoid implementing extra things that are both hard to make and can only serve to make the software less useful. The only people I know of who want to build a provably friendly baz are SI. They also call their baz a koo, for the sake of soliciting donations for work on the baz and for the sake of convincing people that a koo is as dangerous as a baz (which they achieve by not caring enough to see the distinction). The SI team acts as a non-friendly baz would.
edit: to clarify the trust sense of friendliness: you don't want a node to model what the other nodes would do; that would be duplicated computation. This rules out straightforward utilitarian consequentialism as a practically relevant foundational principle, because the consequences are not being modelled.
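To make the 'duplicated computation' point concrete, here is a minimal toy sketch, with a made-up node count and a placeholder expensive_subtask standing in for whatever work a node actually does (this is not anyone's actual architecture): if each node has to model, i.e. re-compute, what every other node would do before relying on it, total work grows as N squared, whereas nodes that trust each other's reports do each piece of work only once.

```python
import time

def expensive_subtask(i: int) -> int:
    # Stand-in for whatever real work a node contributes.
    return sum(k * k for k in range(200_000)) + i

N = 8  # hypothetical number of nodes

# No trust: each node re-derives every other node's result before relying on it,
# so the same work is done N times over (N*N subtask runs in total).
start = time.perf_counter()
for node in range(N):
    _ = [expensive_subtask(other) for other in range(N)]
no_trust = time.perf_counter() - start

# Trust: each node does its own work once and accepts the others' reported results
# (N subtask runs in total).
start = time.perf_counter()
reports = [expensive_subtask(node) for node in range(N)]
with_trust = time.perf_counter() - start

print(f"without trust: {no_trust:.2f}s  with trust: {with_trust:.2f}s")
```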
I think your problem is that you define intelligence very narrowly, as something that works to directly fulfil material needs, goals, etc., and the smarter it is, the more closely it must be modelled by that ideal (you have somewhere lost the concept of winning the most and replaced it with something else).
Defining intelligence as the ability to achieve arbitrary goals is a very narrow definition? What’s the broader one?
You don't define it as ability; you define it as ability plus some material goals themselves. Furthermore, you imagine that a super-intelligence will necessarily be able to maximize the number of paperclips in the universe as a terminal goal, whereas it is not at all necessarily the case that it is even possible to specify that sort of goal. edit: that is to say, material goals are very difficult to specify. cousin_it had an idea for utilities for UDT in which the UDT agent has to simulate the entire multiverse (starting from the big bang) and find instances of itself inside it: http://lesswrong.com/lw/8ys/a_way_of_specifying_utility_functions_for_udt/ . It's laughably hard to make a dangerous goal.
edit: that is to say, you focus on material goals (maybe for lack of understanding of any other goals). For example, a koo can try to find the values of a set of variables describing a microchip that result in maximum microchip performance; that's an easy goal to define. A baz would instead try either to attain some material state of the variables and registers of its own hardware, resisting shutdown, or to outright attain the material goal of building a better CPU in reality. All the goal space you can even think of is but a tiny speck in the enormous space of possible goals: an uninteresting speck that is both hard to reach and obviously counterproductive.
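To illustrate the 'easy to define' kind of goal, here is a minimal sketch (the chip model, the parameter ranges, and the search method are all made up for the example; nothing here is a real simulator): the goal is completely specified by a function of a few design variables, and neither the goal nor the search refers to the material world, to the machine running it, or to resisting shutdown.

```python
import random

def modelled_performance(clock_ghz: float, pipeline_depth: int, cache_mb: int) -> float:
    # Stand-in for a simulator's score of a chip design: a pure function of its inputs,
    # saying nothing about the real world or about the machine running this code.
    return (clock_ghz * 10.0
            - 0.8 * clock_ghz ** 2
            - 0.5 * abs(pipeline_depth - 14)
            + 0.3 * cache_mb)

# Plain random search over the formal model; the whole goal is "maximise this function".
best, best_score = None, float("-inf")
for _ in range(10_000):
    candidate = (round(random.uniform(1.0, 6.0), 2),
                 random.randint(5, 30),
                 random.choice([1, 2, 4, 8, 16, 32]))
    score = modelled_performance(*candidate)
    if score > best_score:
        best, best_score = candidate, score

print("best parameters:", best, "modelled score:", round(best_score, 2))
# A "material" version of the goal (actually build the better CPU, or preserve some
# state of this machine's registers against shutdown) cannot be written down this way;
# it would need something like the linked UDT construction: simulate possible worlds,
# locate yourself in them, and count the CPUs.
```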
Humans are a type of node of the AGI. The AGI needs its own nodes (and the protocols and other things that make the AGI be itself).
This sounds a little like Heylighen’s Global Brain argument against the Singularity. We mention it, though not under the “they will need us” heading.