We find CSAs in our actual world because it is possible for them to evolve to that point (see humans for proof). Anything more advanced (a Friendly AI, for example, that doesn't act like a human) takes actual planning, and as we can see, that's a difficult task whose first stage requires a CSA or something of equivalent intelligence.

Humans, you see, don't experience the halting problem for one main reason: They Don't Halt. Most "coulds, shoulds, and woulds" happen in the animal's spare time. A human has a long, long list of default actions that never required planning. If a truly novel situation arises, a human has no particular attachment to it, having never experienced it before, and can think abstractly about it. Anything else is in some way linked to a default behavior (one that "feels right"), which is what makes an option feel possible. What makes CSAs so handy to have is that even when they don't know exactly WHAT to do, they still DO something. Solve the world's problems when you're not busy eating, sleeping, or attempting to mate.

A Friendly AI is not necessarily a Satisfied AI. Perhaps the trick is to give the AGI the ability to make paperclips every once in a while. "General" includes paperclips, correct? Otherwise one is only trying to make a VERY advanced thermostat.
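To make the "they don't halt" point concrete, here is a minimal sketch of that kind of agent loop: deliberate under a time budget, and if no plan emerges, fall back to a default action rather than stalling. Everything here (the function names, the default list, the placeholder scoring) is an illustrative assumption of mine, not anything specified in the discussion above.

```python
import time

# Hypothetical sketch of a "don't halt" agent loop.
# All names and the placeholder logic are illustrative assumptions.

DEFAULT_ACTIONS = ["eat", "sleep", "attempt_to_mate"]  # the long list of defaults


def generate_candidates(situation):
    # Placeholder: a real agent would enumerate situation-specific options.
    yield from DEFAULT_ACTIONS


def is_improvement(candidate, best, situation):
    # Placeholder scoring: just keep the first candidate found.
    return best is None


def pick_default(situation):
    # Fall back to whatever "feels right" -- here, simply the first default.
    return DEFAULT_ACTIONS[0]


def deliberate(situation, budget_seconds=0.1):
    """Try to plan a better action, but give up when the time budget runs out."""
    deadline = time.monotonic() + budget_seconds
    best = None
    for candidate in generate_candidates(situation):
        if time.monotonic() > deadline:
            break  # never halt forever: stop thinking when time is up
        if is_improvement(candidate, best, situation):
            best = candidate
    return best


def act(situation):
    """Always DO something: use the plan if one emerged, else a default action."""
    plan = deliberate(situation)
    return plan if plan is not None else pick_default(situation)


print(act("truly novel situation"))  # always returns *some* action
```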