“I don’t think the policy of ‘I will fund people to do work that I don’t expect to be useful’ is a good one, unless there is some positive externality.”
By this, do you mean you think it’s not good to fund work that you expect to be useful with < 50% probability, even if the downside risk is zero?
Or do you mean it’s not useful to fund work that you strongly expect to have no positive value, especially when you also see a significant risk of it causing harm?
50% is definitely not my cutoff, and I don’t have any probability cutoff. More something in the expected value space. Like, if you have an idea that could be really great but only has a 1% chance of working, that still feels definitely worth funding. But if you have an idea that seems like it only improves things a bit, and has a 10% chance of working, that doesn’t feel worth it.
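The comparison above can be sketched numerically. The probabilities (1% and 10%) come from the comment itself; the utility numbers are invented purely for illustration, and the zero-downside assumption is stated in the code:

```python
# Hypothetical sketch of the expected-value comparison above.
# The utility numbers are made up for illustration only.

def expected_value(p_success: float, value_if_success: float) -> float:
    """Expected value of funding a project, assuming zero cost and no downside."""
    return p_success * value_if_success

# A long-shot idea that could be really great: 1% chance, large upside.
long_shot = expected_value(0.01, 1000)

# A modest idea: 10% chance, small upside.
modest = expected_value(0.10, 10)

# Despite its much lower probability, the long shot dominates on expected value.
assert long_shot > modest
```

This is just the standard expected-value calculation; the point is that a probability cutoff alone (e.g. "fund only if > 50%") would reject the long shot even though its expected value is higher.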