In principle, you could enumerate every possible scenario your AI system could encounter and specify what the best action is in that situation. In practice, this requires an impossible amount of computer memory, impossibly complete knowledge of what the system is likely to encounter, an impossible amount of work to actually create and test the mapping of input states to output actions, or, more commonly, some combination of these.
Even if it could be done—in what sense could the result be meaningfully called “AI”? A modern computer behaves perfectly deterministically, always performing the same action under the same conditions, but it is a tool, not an intelligence, and it can’t learn on its own or generalize to new inputs. An AI with the ability to understand natural language will eventually be able to learn and use words it hasn’t heard before, but my computer will never “know” what to do if I remove the “enter” button from the keyboard and plug in a toaster.
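To make that concrete, here is a minimal toy sketch (my own illustration, not taken from any real system) of what such an exhaustive input-to-action mapping amounts to: a plain lookup table that is perfectly deterministic for the states someone bothered to write down, and has nothing to say about anything else.

```python
# Toy illustration (hypothetical): an "agent" that is nothing but an
# exhaustive mapping from input states to actions. It is perfectly
# deterministic, but it cannot generalize -- any state missing from the
# table is simply unknown to it.

policy = {
    "enter key pressed": "submit form",
    "escape key pressed": "cancel dialog",
    "mouse click on OK": "confirm",
    # ...in principle, one entry per possible state the system could meet;
    # in practice, the table (and the work of writing and testing it) explodes.
}

def act(state):
    """Look up the prescribed action, or fail on anything unanticipated."""
    if state in policy:
        return policy[state]
    return "no idea"  # the "toaster plugged into the keyboard" case

print(act("enter key pressed"))   # -> submit form
print(act("toaster plugged in"))  # -> no idea
```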
I’m far from well-read on these topics myself, so I’m likely misunderstanding the question or answering it poorly. I recommend looking at some of the curated sequences on AI safety on LessWrong (click the “Library” header in the sidebar menu and scroll for relevant titles). It’s very possible your questions are addressed there.
Does it have to be deterministic, though? Can a program be open-ended, in the sense that the process is optimized but the outcome is undetermined? (Perhaps navigating the world like that is “intelligence” without the “artificial.”) I think AI is capable of learning on its own, though, or at least of programming other algorithms without human input. And one of the issues there is that once it learns language, as you point out, it will be able to do things we can’t really fathom right now.
Thanks for the sequence rec. I’ll check it out!