Seriously, why should anyone think that SI is anything more than “narcissistic dilettantes who think they need to teach their awesome big picture ideas to the mere technicians that are creating the future”, to paraphrase one of my friends?
re: sex life, nothing wrong with it per se, but consider that there are things like the psychopathy checklist, where you score points for, basically, talking people into giving you money, for being admired beyond your accomplishments, and also for sexual promiscuity. On top of that, most people will give you a fuzzy psychopathy point for believing the AI to be psychopathic, because of the typical mind fallacy. Not saying that this is solid science, it isn’t; just outlining the way many people think.
On top of that, most people will give you a fuzzy psychopathy point for believing the AI to be psychopathic, because of the typical mind fallacy. Not saying that this is solid science, it isn’t; just outlining the way many people think.
This doesn’t seem to happen when people note that corporations, viewed as intentional agents, behave like human psychopaths. The reasoning is even pretty similar to the case for AIs: corporations exhibit basic rational behavior but mostly lack whatever special sauce individual humans have that makes them a bit more prosocial.
Well, intelligence in general can be much more alien than this.
Consider an AI that, given any mathematical model of a system and some ‘value’ metric, finds the optimal parameters for an object in that system. E.g. the system could be the Navier-Stokes equations and a wing: the wing shape would be the parameter, some metric combining the wing’s drag and lift would be the value to maximize, and the AI would do everything necessary, including figuring out how to simulate those equations efficiently.
Or the system could be general relativity and quantum mechanics, the parameter could be a theory-of-everything equation, and some metric of inelegance would be the value to minimize.
That’s the sort of thing that scientists tend to see as ‘intelligent’.
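To make the pattern concrete, here is a minimal sketch of that ‘model + value metric → best parameters’ interface. Everything in it is invented for illustration: lift_to_drag is a smooth analytic stand-in for what a real system would get from an actual Navier-Stokes solver, and random search stands in for whatever far cleverer optimization method such an AI would come up with.

```python
import random

def lift_to_drag(params):
    # Hypothetical surrogate for the value of a wing shape; a real system
    # would evaluate this by simulating the Navier-Stokes equations.
    camber, thickness = params
    lift = camber * (1.0 - camber)              # peaks at moderate camber
    drag = 0.01 + thickness ** 2 + camber ** 2 / 4.0
    return lift / drag                          # the 'value' metric to maximize

def optimize(value, bounds, iters=10000, seed=0):
    # Generic loop: given a value metric and parameter bounds, return the
    # best parameters found. Random search stands in for the real method.
    rng = random.Random(seed)
    best_params, best_value = None, float("-inf")
    for _ in range(iters):
        params = [rng.uniform(lo, hi) for lo, hi in bounds]
        v = value(params)
        if v > best_value:
            best_params, best_value = params, v
    return best_params, best_value

params, score = optimize(lift_to_drag, bounds=[(0.0, 1.0), (0.01, 0.3)])
print(params, score)
```

Note that nothing in this interface mentions the outside world at all: the optimizer’s entire universe is the model it was handed.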
The concept of AI, however, has acquired plenty of connotations from science fiction, where it is very anthropomorphic.
Those are narrow AIs. Their behavior doesn’t involve acquiring resources from the outside world and autonomously developing better ways to do that. That’s the part that might lead to psychopath-like behavior.
Specializing the algorithm to the outside world and to a particular philosophy of value does not make it broader, or more intelligent, only more anthropomorphic (and less useful, if you don’t believe in friendliness).
The end value is still doing the best possible optimization of the parameters of the mathematical system. There are many more resources to be used for that in the outside world than whatever is available to the algorithm when it starts up. So an algorithm that can interact effectively with the outside world may be able to satisfy whatever alien goal it has much better than one that can’t.
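A toy numerical sketch of that claim, with all numbers invented purely for illustration: suppose the quality of the best solution found grows with search effort, and one variant of the optimizer can spend some of its steps acquiring outside-world compute that multiplies the throughput of the remaining steps.

```python
def value_reached(effort):
    # Invented stand-in: solution quality grows with search effort,
    # with diminishing returns, approaching 1.0.
    return 1.0 - 1.0 / (1.0 + effort)

def pure_optimizer(steps):
    # Spends every step directly on the mathematical optimization.
    return value_reached(steps)

def resource_acquiring_optimizer(steps, acquire=20, multiplier=10.0):
    # Spends `acquire` steps obtaining extra compute (hypothetical numbers),
    # then searches `multiplier` times faster with the remaining steps.
    return value_reached((steps - acquire) * multiplier)

print(pure_optimizer(100))                # ~0.9901
print(resource_acquiring_optimizer(100))  # ~0.9988 -- the detour wins
```

The same purely mathematical end value is being served in both cases; the detour through the outside world is merely instrumental.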
(I’m a bit confused whether you want the Omohundro Basic AI Drives stuff explained to you here or whether you want to disagree with it.)
Having the specific hardware that is computing an algorithm actually display the results of the computation within a specific time is outside the scope of the ‘mathematical system’.
Furthermore, the decision theories are all built to be processed using the above-mentioned mathematics-solving intelligence to attain real-world goals, except that defining real-world goals proves immensely difficult. edit: also, if the mathematics-solving intelligence were to have some basic extra drives, such as resisting being switched off (so that it could complete its computations), then an FAI relying on such a mathematics-solving subcomponent would be impossible. The decision theories presume the absence of any such drives inside their mathematics-processing component.
Omohundro Basic AI Drives stuff
If sufficiently advanced technology is indistinguishable from magic, then arguments about a “sufficiently advanced AI system”, in the absence of an actual definition of what it is, are indistinguishable from magical thinking.
If sufficiently advanced technology is indistinguishable from magic, then arguments about a “sufficiently advanced AI system”, in the absence of an actual definition of what it is, are indistinguishable from magical thinking.
That sentence is magical thinking. You’re equating the meaning of the word “magic” in Clarke’s Law and in the expression “magical thinking”, which do not refer to the same thing.
I thought the expression ‘magical thinking’ was broad enough to include fantasising about magic. I do think, though, that even in the sense of ‘thinking by word association’ it happens a whole lot in futurism, where the field is ill-specified and collisions between model and world are commonplace (along with general confusion due to the lack of specificity of the terms).
If sufficiently advanced technology is indistinguishable from magic, then arguments about a “sufficiently advanced AI system”, in the absence of an actual definition of what it is, are indistinguishable from magical thinking.
Ok, then, so the actual problem is that the people who worry about AIs behaving psychopathically use such a capable definition of AI that you consider them to be basically speaking nonsense?
The “sufficiently advanced” in their argumentation means “sufficiently advanced in the direction of making my argument true” and nothing more.
If I adopt a pragmatic version of “advancedness”, then software (algorithms) that is somehow magically made to* self-identify with its computing substrate is less advanced, unless it is also friendly or something.
* We don’t know how to do that yet. edit: and some believe that it would just fall out of general smartness somehow, but I’m quite dubious about that.
Yeah.
Seriously, why should anyone think that SI is anything more than “narcissistic dilettantes who think they need to teach their awesome big picture ideas to the mere technicians that are creating the future”, to paraphrase one of my friends?
This is pretty damn illuminating:
http://lesswrong.com/lw/9gy/the_singularity_institutes_arrogance_problem/5p6a