All I supposed was that the AI was doing something.
Can you be more specific? I have an AI that iterates the parameters of some strange attractor (defined within it) until it finds unusual behaviour. I can make an AI that would hill-climb and search for improvements to the former AI. Edit: Now, the worst thing that can happen is that it makes a mind-hack image that kills everyone who looks at it. That wasn't the intent, but the 'unusual behaviour' might get too unusual for a human brain to handle. Is that a serious risk? No, it's a laughable one.
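For concreteness, a minimal sketch of the kind of search described above, assuming the strange attractor is the Hénon map and that "unusual behaviour" is scored by an invented spread-of-orbit measure (the function names and the scoring rule are illustrative, not from the original comment):

```python
import random

def henon_step(x, y, a, b):
    # One iteration of the Henon map, a standard strange attractor.
    return y + 1.0 - a * x * x, b * x

def unusualness(a, b, steps=2000):
    # Hypothetical scoring function: here, just how widely the orbit
    # spreads without diverging. A real setup would score whatever
    # "unusual behaviour" means to its author.
    x, y = 0.1, 0.1
    lo = hi = x
    for _ in range(steps):
        x, y = henon_step(x, y, a, b)
        if abs(x) > 1e6:  # orbit escaped; not interesting
            return float('-inf')
        lo, hi = min(lo, x), max(hi, x)
    return hi - lo

def hillclimb(a=1.4, b=0.3, iters=1000, step=0.01):
    # Local search over the map's parameters for a higher score.
    best = unusualness(a, b)
    for _ in range(iters):
        na = a + random.uniform(-step, step)
        nb = b + random.uniform(-step, step)
        score = unusualness(na, nb)
        if score > best:
            a, b, best = na, nb, score
    return a, b, best

print(hillclimb())
```

Nothing in this loop touches anything outside the process; whatever it "maximizes" exists only inside the scoring function, which is the point of contention below.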
Implicit in my setup was that the AI reached the point where it was having noticeable macroscopic effects on our world. This is obviously easiest when the AI’s substrate has some built-in capacity for input/output. If we’re being really generous, it might have an autonomous body, cameras, an internet connection, etc. If we’re being stingy, it might just be an isolated process running on a computer with its inputs limited to checking the wall-clock time and outputs limited to whatever physical effects it has on the CPU running it. In the latter case, doing something to the external world may be very difficult but not impossible.
The program you have doing local search in your example doesn’t sound like an AI; even if you stuck it in an autonomous body, it wouldn’t do anything to the world that isn’t a generic side effect of its running. No one would describe it as maximizing anything.
Well, it is maximizing whatever I defined for it to maximize, usefully for me, and in a way that is practical. In any case, you said, “All I supposed was that the AI was doing something.” My AI is doing something.
This is obviously easiest when the AI’s substrate has some built-in capacity for input/output. If we’re being really generous, it might have an autonomous body, cameras, an internet connection, etc.
Yeah, and it’s rolling forward and clamping its manipulators until they wear out. Clearly you want it to maximize something in the real world, not just do something. The issue is that the only things it can do in roughly this direct way are on the level of shooting at the colour blue, or the like.
Everything else requires a very detailed model, maximization of something within that model, and then carrying out the resulting actions in the real world. Interestingly, that last step is entirely optional, and even humans have trouble getting themselves to do it (when I invent something and am satisfied that it will work, it is boring to implement; that’s a common problem). Edit: one other point: without a model, all you can do is try random stuff on the world itself, which is not at all intelligent (it resembles Wheatley in Portal 2 trying to crack the code).
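A toy illustration of that distinction, assuming a trivial one-dimensional "world"; everything here (world_step, plan_in_model, the reward peak at position 7) is invented for the sketch:

```python
import random

# A toy "real world": the agent's position on a line; reward is -|pos - 7|.
def world_step(pos, action):
    return pos + action  # action in {-1, 0, +1}

def reward(pos):
    return -abs(pos - 7)

# Model-based: search over action sequences inside an internal copy of
# the world, then (optionally) carry the best plan out for real.
def plan_in_model(start, horizon=10, tries=500):
    best_plan, best_score = None, float('-inf')
    for _ in range(tries):
        plan = [random.choice((-1, 0, 1)) for _ in range(horizon)]
        pos = start
        for a in plan:
            pos = world_step(pos, a)  # simulated step, not a real one
        if reward(pos) > best_score:
            best_plan, best_score = plan, reward(pos)
    return best_plan

# Model-free: poke the real world at random (the "Wheatley" strategy).
def act_randomly(start, horizon=10):
    pos = start
    for _ in range(horizon):
        pos = world_step(pos, random.choice((-1, 0, 1)))  # real, irreversible
    return pos

pos = 0
for a in plan_in_model(pos):
    pos = world_step(pos, a)  # carrying out the plan in the real world
print("planned:", pos, "random:", act_randomly(0))
```

The planner reliably ends near the reward peak because all its trial and error happens in the model; the random agent wanders, because every experiment it runs costs a real action.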