The thing is that there’s nothing complicated or mysterious whatsoever about having a self-model. If I were to write an autopilot, I would include a flight simulator inside it, to test the autopilot’s outputs and ensure that they don’t kill the passenger (me)*. I could go fancier and include the autopilot in the simulation itself, so as to ensure that the autopilot does not put the airplane into a situation where it can’t avoid a collision.
Presto, a self-aware airplane, which is about as smart as a brain-damaged fruit fly. It’s even aware of the autopilot inside the airplane.
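A minimal sketch of that loop, assuming toy point-mass dynamics and a made-up action set; nothing here is real avionics:

```python
# A toy "simulator inside the autopilot". The dynamics, the candidate
# actions, and the safety test are placeholders I made up, not any
# real avionics API.

def simulate_step(state: dict, climb: float) -> dict:
    """Toy dynamics: one step of the internal flight simulator."""
    return {"altitude": state["altitude"] + climb, "terrain": state["terrain"]}

def safe(state: dict) -> bool:
    return state["altitude"] > state["terrain"]

def autopilot(state: dict, horizon: int = 5):
    """Pick a climb rate whose simulated future stays safe. The recursive
    call puts the autopilot itself inside the simulation: it checks that
    its own future self will still have a safe move available."""
    for climb in (0.0, 5.0, 10.0):  # candidate climb rates, m/s
        future = simulate_step(state, climb)
        if not safe(future):
            continue
        if horizon == 0 or autopilot(future, horizon - 1) is not None:
            return climb
    return None  # no action keeps the simulated airplane safe

print(autopilot({"altitude": 100.0, "terrain": 90.0}))  # 0.0: level flight is fine
```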
If I were to write a chess AI, the chess AI would be recursive: it tries a move and then ‘thinks’ about what it would do in the resulting position, using itself as its own self-model.
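In code this is just the minimax/negamax recursion: the program values a move by calling itself on the resulting position. Chess is too big for a self-contained snippet, so here is the same structure on a toy take-away game; every name below is mine, not from any real engine:

```python
# Negamax on a toy take-away game: players alternately remove 1-3
# stones, and whoever takes the last stone wins. The engine values a
# position by asking what *it* would do in the opponent's place; the
# program is its own model of the future player.

def negamax(stones: int) -> int:
    """Return +1 if the player to move can force a win, else -1."""
    if stones == 0:
        return -1  # the previous player took the last stone; we lost
    return max(-negamax(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones: int) -> int:
    """Pick the take whose resulting position is worst for the opponent."""
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: -negamax(stones - take))

print(best_move(10))  # prints 2: leaving 8 (a multiple of 4) forces a win
```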
Speaking of dogs: the Boston Dynamics BigDog robot, from what I know, includes a model of its own physics. It is about as smart as a severely brain-damaged cockroach.
So you end up with a lot of non-living things being self-aware. Constantly self-aware, whereas a case can be made that humans aren’t constantly self-aware. Non-living things dumber than a cockroach, being self-aware.
edit: one could shift the goalposts and require that the animal be capable of developing a self-model; well, you can teach a dog to balance on a rope, and balancing on a rope pretty much requires some form of model of the body’s physics. You can also make a pretty stupid (dumber-than-a-cockroach) AI in a robot that would build a self-model, not only of the robot’s body but of the AI itself.
* [ I have never worked on autopilots, and from what I gather they don’t generally include this at runtime; instead they are tested in a simulator during development. I value my survival and don’t have a grossly inflated view of my coding abilities, so I’d include that simulator, and add a really loud alarm to wake the pilot if anything goes wrong. An example of me using a self-model to improve my survival. From what I can see of the other programmers I know, many start out with an inflated view of their coding abilities, which keeps biting them in the backside all day long until they get it, perhaps becoming more self-aware. ]
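For what it’s worth, the alarm is only a few extra lines on top of the toy autopilot sketched above; the alarm function is a stand-in, not real cockpit hardware:

```python
# Hypothetical runtime watchdog built on the toy autopilot above;
# sound_alarm is a stand-in for whatever cockpit hardware would
# actually be driven.

def sound_alarm() -> None:
    print("WAKE UP: autopilot cannot find a safe path")

def watchdog_tick(state: dict) -> None:
    if autopilot(state) is None:  # the self-model found no safe action
        sound_alarm()

# Already below the terrain in the toy model; no candidate climb rate
# recovers in one step, so the alarm fires.
watchdog_tick({"altitude": 75.0, "terrain": 90.0})
```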
So you end up with a lot of non-living things being self-aware.
In this sense, self-awareness is easy; the question is awareness of what, exactly, and how it is used.
Awareness of one’s body position is less interesting: it can only be used for movement. For a biological social species, awareness of one’s own behavior and mind probably leads to improved algorithms; perhaps it is necessary for some kinds of learning.
I am not sure what benefits self-awareness would bring to a machine… maybe it depends on the machine’s construction and algorithm. For example, when a machine is given the task of computing something, a non-self-aware machine would just compute it, but a self-aware machine might realize that with more memory and a faster CPU it could do the calculation better.
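A toy sketch of that kind of resource self-awareness, assuming a simple time-budget setup; everything here is made up for illustration:

```python
# A toy illustration of resource self-awareness: the program measures
# its own speed, then sizes its workload to fit a time budget instead
# of blindly running a fixed amount of work.
import time

def ops_per_second(trial_ops: int = 100_000) -> float:
    """Micro-benchmark: how fast does this machine run a tight loop?"""
    start = time.perf_counter()
    acc = 0
    for i in range(trial_ops):
        acc += i * i
    return trial_ops / (time.perf_counter() - start)

def choose_workload(budget_seconds: float) -> int:
    """Attempt only as much work as this machine believes it can finish."""
    return int(ops_per_second() * budget_seconds)

print(f"estimated capacity for a 0.1 s budget: {choose_workload(0.1)} ops")
```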
Yeah. Well, here you enter the realm of general intelligence: a general intelligence would just look at the world, see itself, and figure things out, including the presence of a self and such.
I’m not convinced that this is how it usually works for H. sapiens. I don’t believe we are self-aware as a function of general intelligence, and here’s why: we tend to have serious discussions of things like the philosophical zombie. The philosophical zombie is a failure to recognize the physical item that is the self as the self. I seriously think we’re just hardcoded to be self-aware: we perceive some of our own thought processes in a way similar to how we perceive the external world. This confuses the hell out of people, to the point that they fail to recognize themselves in a physical system (hence p-zombies).
Some details about how exactly it works for Homo sapiens can be found in the works of Vygotsky and Piaget; they did some cool experiments on what kinds of reasoning a human child is generally capable of at what age. Some models need time and experience to develop, though maybe we have some hardware support that makes them click faster. For example, at some age children start to understand object permanence and the continuity of motion (when an interesting object disappears behind a barrier, they no longer look at the point where it disappeared, but at the opposite side of the barrier, where it should reappear). At some age children start to understand that their knowledge differs from other people’s knowledge (a child is shown some structure from both sides, another person sees it only from one side, and the child has to say which parts of the structure the other person saw). So our models develop gradually.
Modelling thinking is difficult, because we cannot directly observe the thoughts of others, and the act of observing interferes with what is being observed. There are techniques that help. It is also difficult to recognize oneself as a physical system when one doesn’t know how exactly the system works. If I had no information about how the brain works, what reason would I have to believe that my mind is a function of my brain? My muscles move and I can see their shapes under my skin, but I never observe a brain in action. In the same way, by observing a robot you would come to understand the wheels and motors, but not the software or the non-moving parts of the hardware; even if you were that robot.
That’s one good definition.