Agent-based modeling seems like an obvious next step for much of social-science-related academic progress. OpenAI's Hide and Seek experiment is one I'm excited about, but it is very simple, and I imagine similar work could be extended much further in other fields. The combination of simulation, ML distillation of simulations (to make them run much faster), and effective learning algorithms for agents seems very powerful.
However, agent-based modeling still seems to be used infrequently within academia. My impression is that current agent-based software tools are quite unsophisticated and unintuitive compared to what academics would actually find useful.
This feels a bit like a collective action problem. Better tools could hypothetically cost $5-500 million or more, but it's not obvious who would pay for them or how the funding would be structured.
I'm employed by Oxford now, and it's obvious that things aren't well set up to hire programmers: there are strict salary caps and hiring limitations. Our group would probably have an awkward time paying $10,000 per person for strong agent-based software, even if it were worth it in total.
Could you give a few specific examples where you imagine agent-based models would help?
Sure,
Humans as agents / psychology / economics. Instead of making purely mathematical models of rational agents, have people write code that predicts the behavior of rational agents or actual humans. Test these "human bots" against empirical experimental results from humans in different situations, to demonstrate that the code accurately models human behavior.
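As a toy illustration of what a testable "human bot" could look like, here is a minimal sketch for the ultimatum game. The fairness threshold and the "observed" responses below are made-up placeholders for illustration, not real experimental data; actual work would fit and score against real lab results.

```python
def human_bot(offer_fraction, fairness_threshold=0.3):
    """Toy 'human bot': accepts an ultimatum-game offer only if it clears
    a fairness threshold. A payoff-maximizing rational agent, by contrast,
    would accept any positive offer."""
    return offer_fraction >= fairness_threshold

# Hypothetical placeholder observations: (offer as a fraction of the pie,
# whether the responder accepted). Real work would use experimental data.
observed = [(0.1, False), (0.2, False), (0.4, True), (0.5, True)]

# Score the bot: what fraction of the recorded human decisions does it predict?
accuracy = sum(human_bot(offer) == accepted
               for offer, accepted in observed) / len(observed)
```

The point is that the model is executable and falsifiable: swap in a different decision rule and the accuracy score immediately tells you whether it fits the data better.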
Mechanism design. Show that humans behave differently under different incentive structures, and use this to optimize the incentive structures accordingly.
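A minimal sketch of that optimization loop, using an invented toy model of compliance (the payoff numbers and the uniform spread of agent temptations are illustrative assumptions, not an established behavioral model):

```python
def cooperation_rate(punishment, n_agents=100):
    # Toy behavioral model: agent i defects only if its private gain from
    # free-riding exceeds the expected punishment. Gains are spread
    # uniformly over (0, 1] to represent heterogeneous agents.
    gains = [(i + 1) / n_agents for i in range(n_agents)]
    return sum(g <= punishment for g in gains) / n_agents

# "Mechanism design" step: search candidate incentive structures for the
# one that induces the most cooperation in the simulated population.
candidates = [0.0, 0.25, 0.5, 0.75, 1.0]
best = max(candidates, key=cooperation_rate)
```

In a real application, `cooperation_rate` would be replaced by a full agent-based simulation (ideally one validated against human data), and the search over `candidates` by proper optimization.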
Most social science. Build agent-based models to help explain how groups of humans interact with each other and which collective behaviors emerge.
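As a tiny example of collective behavior emerging from local rules (the starting opinions are arbitrary, and the majority rule is a deliberately simplistic stand-in for real social influence): agents on a ring who merely conform to their immediate neighbors, with no global coordination, sort themselves into contiguous opinion blocks.

```python
# Each round, every agent adopts the majority opinion among itself and
# its two neighbors on a ring.
opinions = [1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1]  # arbitrary mixed start

def step(ops):
    n = len(ops)
    return [1 if ops[(i - 1) % n] + ops[i] + ops[(i + 1) % n] >= 2 else 0
            for i in range(n)]

for _ in range(10):
    opinions = step(opinions)

# Isolated dissenters vanish; opinions settle into contiguous blocks.
```

The macro-level pattern (segregated blocks) is nowhere in the micro-level rule, which is exactly the kind of explanation agent-based models are good at making precise.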
I guess when I said "much of academic progress," I should have specified "academic fields that deal with modeling humans to some degree"; perhaps most of social science.
I thought Probabilistic Models of Cognition was quite great (it seems criminally underappreciated); that seems like a good step in this direction.
Perhaps in the future, one could demonstrate that "this environment with these actors will fail in these ways" by empirically showing that reinforcement learning agents optimizing in those setups reliably reach specific outcomes.
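A small sketch of that kind of empirical demonstration (a toy setup, not a proof): two independent bandit-style learners repeatedly playing a Prisoner's Dilemma learn to defect, exhibiting the environment's failure mode directly in the learned values.

```python
import random

random.seed(1)

# Row player's payoffs in a Prisoner's Dilemma; actions: 0 = cooperate, 1 = defect.
PAYOFF = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1}

# Two independent epsilon-greedy learners, each tracking the running
# average payoff of its own two actions.
values = [[0.0, 0.0], [0.0, 0.0]]
counts = [[0, 0], [0, 0]]

def choose(v, eps=0.1):
    return random.randrange(2) if random.random() < eps else max((0, 1), key=lambda a: v[a])

for _ in range(5000):
    acts = (choose(values[0]), choose(values[1]))
    for player in (0, 1):
        a = acts[player]
        reward = PAYOFF[(a, acts[1 - player])]
        counts[player][a] += 1
        values[player][a] += (reward - values[player][a]) / counts[player][a]

# Both learners end up valuing defection above cooperation: the incentive
# structure's failure to sustain cooperation shows up in the learned values.
```

Scaled up with richer environments and stronger learners, this is the kind of setup that could turn "this institution will be gamed in these ways" into a reproducible empirical claim rather than an informal argument.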