Oh, maybe, but it seemed like the specific proposal here was the scale. Hiring a few people to do AI research seemed… sort of just the same as all the hiring and independent research that's already been happening. The novel suggestion here is hiring less-aligned people at a scale that would demonstrate to the rest of the world that the problem is really hard. (Not sure exactly what the OP meant, and not sure what specific new element you were interested in.)
I guess a question here is what the returns-to-scale curve looks like? I'd be surprised if the 501st-1000th researchers were more valuable than the 1st-500th, suggesting there is a smaller version that's still worth doing.
I don't know where this guess comes from, but my guess is that the curve increases up to somewhere between 10 and 100 researchers and decreases after that. But there are also likely to be threshold effects at round/newsworthy numbers?
That does sound right-ish.
I'm not going to speculate about who meant what. But for me the new and interesting idea was to pay people to do research in order to change their minds, as opposed to paying people to do research in order to produce research.
As far as I know, all the current hiring and funding of independent researchers is directed towards people who already believe that AI Safety research is difficult and important. Paying people who are not yet convinced is a new move (as far as I know), even at a small scale.
I guess it is currently possible for an AI risk skeptic to get an AI Safety research grant. But none of the grants are designed for this purpose, right? I think the format of a very highly paid (more than they would earn otherwise), very short (not a major interruption to ongoing research) offer, with the possibility of a prize at the end, is better optimized to get skeptics on board.
In short, the design of a funding program will be very different when you have a different goal in mind.