Stay in touch with the broader ML community (e.g., by following them on Twitter, attending AI events)
I got a lot of value out of attending ICML and would probably recommend attending an ML conference to anyone who has the resources. You actually get to talk to authors about their field and research process, which gets you a lot more than reading papers or reading Twitter.
Anyway, I think you missed one of the best ideas: actually trying to understand the arguments yourself and only using correct ones. An argument isn’t correct just because it’s “grounded” or “contemporary”, although it is good to have supporting evidence. Every step has to be locally valid, and the assumptions have to hold. Sometimes an argument needs to be slightly changed from the most common version to be valid [1], which only makes doing this work yourself more important.
Community builders often don’t do technical research themselves, so my guess is that it’s easy to underinvest here. But sometimes the required steps are as simple as listing out an argument in great detail and looking at it skeptically, or checking with someone who knows ML whether a current ML system has some property that we assume, and whether we expect it to arise anytime soon.
[1]: Two examples: making various arguments compatible with “Reward is not the optimization target”, and making coherence arguments work even though AI systems will not necessarily have a single fixed utility function.
Your last sentence in the first paragraph seems to be cut off at “gets a lot more than”!
Great point, I’ve added this suggestion to the post.