[Question] What problem would you like to see Reinforcement Learning applied to?

We often talk about the dangers and challenges of AI and self-improving agents, but I’m curious what you view as potential beneficial applications of AI, if any! As an ML researcher I encounter a lot of positivity and hype in the field, so the very different perspective of the rationality community would be very interesting.

Specifically I’d like to focus on reinforcement learning (RL), because it most closely matches the concept of an AI that has agency and makes decisions. An RL agent is usually defined as a program that interacts with an environment, choosing actions so as to maximise the sum of rewards it receives.
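For concreteness, here is a minimal sketch of that interaction loop. The Gymnasium API, the CartPole environment, and the random policy are my illustrative choices, not part of the definition; a real agent would learn to pick actions that maximise the return.

```python
import gymnasium as gym  # illustrative choice of RL interface

env = gym.make("CartPole-v1")       # the environment: the problem to be solved
obs, info = env.reset(seed=0)
total_reward = 0.0                  # the agent's objective: the sum of rewards

done = False
while not done:
    # A random policy stands in for the agent here; a learning agent
    # would choose actions so as to maximise total_reward.
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated

print(f"Episode return: {total_reward}")
```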

The environment represents the problem to be solved, and the rewards measure how good the solution is. For some problems, such as a board game or 3-SAT, assessing a solution (assigning a reward) is easy; for others, computing a reward may be as difficult as solving the problem in the first place. Those are likely not good candidates for RL ;)
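To make the "easy reward" case concrete, here is a sketch of a possible reward for 3-SAT: counting how many clauses a candidate assignment satisfies takes linear time, even though finding a satisfying assignment is NP-hard. The DIMACS-style literal encoding and the function name are my own illustration.

```python
def sat_reward(clauses, assignment):
    """Fraction of clauses satisfied by a truth assignment.

    clauses: list of 3-tuples of non-zero ints; literal k refers to
             variable |k|, negated if k < 0 (DIMACS-style, for illustration).
    assignment: dict mapping variable index -> bool.
    """
    satisfied = sum(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )
    return satisfied / len(clauses)

# Example: (x1 or not x2 or x3) and (not x1 or x2 or x3)
clauses = [(1, -2, 3), (-1, 2, 3)]
print(sat_reward(clauses, {1: True, 2: True, 3: False}))  # -> 1.0
```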

To facilitate discussion, I would suggest one top-level reply per problem, specifying:

  • a short description of the problem

  • the action space: how does the agent interact with the problem / environment?

  • the reward: how do we know the agent has done well?

Disclaimer: I work on RL, so if you make suggestions that are feasible and would have a substantial positive impact on the world, I may pursue them.