To solve problems of mimicking a function from provided inputs and outputs, the first algorithm I would use is this:
For every possible program of length less than X, run it on the inputs for time Y, then measure how close its outputs come to the provided outputs. The closest program is then your model.
This takes time O(Y*2^X), so it’s impractical in the world we live in, but in this hypothetical world it would work pretty well. It only solves the “classification” or “modeling” type of machine learning problem, rather than reinforcement learning per se, but that seems like a pretty good start.
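Here is a minimal sketch of that brute-force search in Python. It is only illustrative: the tiny instruction set (inc, dec, double, square) stands in for “all possible programs,” a step cap stands in for the time limit Y, and the helper names (brute_force_fit, run, loss) are my own, not anything from a real library.

```python
# Minimal sketch of the brute-force "try every short program" idea.
# A "program" here is a sequence of primitive ops from a toy instruction
# set (an assumption for this example); in general you would enumerate
# binary strings fed to some universal machine.

from itertools import product

# Hypothetical primitive operations; any small instruction set would do.
OPS = {
    "inc":    lambda v: v + 1,
    "dec":    lambda v: v - 1,
    "double": lambda v: v * 2,
    "square": lambda v: v * v,
}

def run(program, x, max_steps):
    """Run a program (tuple of op names) on input x for at most max_steps steps."""
    v = x
    for step, op in enumerate(program):
        if step >= max_steps:          # the "time Y" cutoff
            break
        v = OPS[op](v)
    return v

def loss(program, examples, max_steps):
    """Sum of squared errors between the program's outputs and the targets."""
    return sum((run(program, x, max_steps) - y) ** 2 for x, y in examples)

def brute_force_fit(examples, max_len, max_steps):
    """Try every program of length < max_len (the 'X' bound) and keep the closest."""
    best, best_loss = (), loss((), examples, max_steps)
    for length in range(1, max_len):
        for program in product(OPS, repeat=length):
            l = loss(program, examples, max_steps)
            if l < best_loss:
                best, best_loss = program, l
    return best, best_loss

if __name__ == "__main__":
    # Mimic f(x) = 2x + 1 from a few (input, output) pairs.
    examples = [(0, 1), (1, 3), (2, 5), (3, 7)]
    program, err = brute_force_fit(examples, max_len=4, max_steps=10)
    print(program, err)   # finds ('double', 'inc') with error 0
```

The inner loop visits |OPS|^length candidates per length, which is the 2^X blow-up in disguise (here the alphabet has four symbols instead of two), and each candidate costs at most Y steps per example, matching the O(Y*2^X) bound above.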
For reinforcement learning, it just depends on what you’d want to do in general. I would not just build a general AI and give it access to the internet, any more than I would bring an army of teenagers over to my house and give them access to my car and wallet. If you really had a super-powerful AI, then I think the best way of increasing its practical capabilities over time while controlling it would be to treat it like any other technology: start a tech company, raise money, think of a business model, and just see what happens. With that strategy it seems way more likely that you could retain control over the technology and continue to express your own moral judgment over time. Compare to, for example, the scientists developing nuclear weapons, who quickly lost control to politicians. Maybe you could build a new search engine, which seems like it could be a lot better with real AI behind it.