There’s a thought that’s been circulating in my mind for a while that social proof is important here. I presume that seeing a reputable person like Scott Aaronson going to work on AI safety would do a lot to convince others (researchers, funders, policymakers) that it is an important and legitimate problem.
Honestly, I suspect this is going to be the single largest benefit of paying Scott to work on the problem. Similarly, when I suggested in an earlier comment that we should pay other academics in a similar manner, the largest benefit in my mind is that it will help normalize this kind of research in the wider academic community. The more respected researchers there are working on the problem, the more other researchers start thinking about it as well, resulting (hopefully) in a snowball effect. Also, researchers often bring along their grad students!
Right, I was going to bring up the snowball effect as well but I forgot. I think that’s a huge point.