The Future of Life Institute favors a portfolio approach to AI safety, in which different groups pursue different research agendas. It's plausible to me that we've hit diminishing returns on resources allocated to MIRI's approach, and that marginal resources are better directed toward starting new research groups.
I hadn’t known about that, but I came to the same conclusion!