The end value is still performing the best possible optimization over the parameters of the mathematical system. There are far more resources available for that in the outside world than the algorithm is likely to have when it starts up. So an algorithm that can interact effectively with the outside world may satisfy whatever alien goal it has much better than one that cannot.
(I’m a bit confused whether you want the Omohundro Basic AI Drives material explained to you here, or whether you want to disagree with it.)
Having the specific hardware that computes an algorithm actually display the results of the computation within a specific time is outside the scope of a ‘mathematical system’.
Furthermore, the decision theories are all built to be processed by the above-mentioned mathematics-solving intelligence in order to attain real-world goals, except that defining real-world goals proves immensely difficult. Edit: also, if the mathematics-solving intelligence were to have some basic extra drives, such as resisting being switched off (so that it could complete its computations), then an FAI relying on such a mathematics-solving subcomponent would be impossible. The decision theories presume the absence of any such drives inside their mathematics-processing component.
Omohundro Basic AI Drives stuff
If sufficiently advanced technology is indistinguishable from magic, then arguments about a “sufficiently advanced AI system”, in the absence of an actual definition of what that is, are indistinguishable from magical thinking.
That sentence is itself magical thinking. You’re equating the meaning of the word “magic” in Clarke’s Law with its meaning in the expression “magical thinking”, and the two do not refer to the same thing.
I thought the expression ‘magical thinking’ was broad enough to include fantasising about magic. Though even in the sense of ‘thinking by word association’, it happens a great deal in futurism too, where the field is ill specified and collisions between model and world are commonplace (along with general confusion due to the lack of specificity of the terms).
Ok, then, so the actual problem is that the people who worry about AIs behaving psychopathically have such a capable definition of AI that you consider them to be basically talking nonsense?
The “sufficiently advanced” in their arguments means “sufficiently advanced in the direction of making my argument true” and nothing more.
If I adopt a pragmatic version of “advancedness”, then software (algorithms) that is somehow magically made to* self-identify with its computing substrate is less advanced, unless it is also friendly or something.
* We don’t know how to do that yet. Edit: some believe that it would just fall out of general smartness somehow, but I’m quite dubious about that.