Four meetings in, people have met, shared updates on study projects and Kaggle competitions, and talked about forming study groups and Kaggle teams. People are more informed about, and seem more interested in, AI (safety) progress. I'm not that emotionally committed to its continuation, but it seems like enough high-potential people with a shared interest are meeting that good things will eventually emerge on the AI safety front.
Is the end-game to do data analysis for charity evaluation, intervention evaluation, cost-effectiveness, and that kind of thing?
Or, to inform people interested in machine learning about AI safety?
It's deliberately not just for AI safety, though currently about half of the people are interested in it. As well as promoting interest in these two areas among people with AI knowledge, the aim is to build AI knowledge among people who already care about them.