follow-up question in my mind: is it okay for a game-playing agent to look at someone else's work and learn from it? we are guessing at the long-term outcomes of the legal system here, so I would also like to answer what the legal system should output, not merely what it is likely to. should game-playing agents be treated more like humans than like supervised agents? My sense is that they should, because reinforcement learners trained from scratch in an environment have an overwhelming amount of their own knowledge, and only a small blip of their training data is the moment where they encounter another agent's art.
Competitive multiplayer games already have a situation where things are "discovered", and you have to literally limit the flow of information if you want to control what others do with it. I guess the fact that money is often not involved might mean it has not been scrutinised that much. "History of strats" is already a youtube genre.
It is kinda sad that for many games now you will "look up how it is supposed to be played", i.e. you first "learn the meta" and then go on your merry way.
I guess for computer agents it could be practical to have amnesia about the actual games that they play. But for humans, that kind of information is going to be shared whenever it is applied in a game. And there is the issue of proving that you didn't cheat by providing a plausible method.
no, I mean, if the game playing agent is highly general, and is the type to create art as a subquest/communication like we are—say, because of playing a cooperative game—how would an ideal legal system respond differently to that vs to a probabilistic model of existing art with no other personally-generated experiences?