Even though such Benefits might not be truly equal or universal, they approximate these values much better than national approaches and are probably more effective given the high fixed costs of attempting national distributions. However, if an organization generates truly large amounts of Benefits, a national per capita strategy seems more appealing.[6]
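To make the quoted fixed-cost argument concrete, here is a rough back-of-envelope sketch. The country count and per-nation setup cost are invented for illustration and are not from the post; the point is only that per-nation overhead shrinks as the pot grows.

```python
# Back-of-envelope sketch (all numbers hypothetical) of the fixed-cost
# tradeoff: distributing nation by nation pays a fixed setup cost per
# country, which only amortises once the total pot of Benefits is large.

COUNTRIES = 195                 # rough count of recipient nations (assumed)
FIXED_COST_PER_NATION = 10e6    # assumed setup cost of one national program, USD

def national_overhead_fraction(total_benefits_usd: float) -> float:
    """Fraction of the pot consumed by per-nation fixed costs."""
    return (COUNTRIES * FIXED_COST_PER_NATION) / total_benefits_usd

for pot in (1e9, 1e10, 1e12):
    print(f"pot ${pot:,.0f}: overhead {national_overhead_fraction(pot):.1%}")
# pot $1,000,000,000: overhead 195.0%   -> fixed costs eat the whole pot
# pot $10,000,000,000: overhead 19.5%
# pot $1,000,000,000,000: overhead 0.2% -> national distribution becomes viable
```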
I would avoid giving heuristics like that much weight. I would instead do QALY calculations, at least to order of magnitude: the QALY impact of different possible projects can differ by orders of magnitude. Which projects are on the table depends on how good the tech is and what’s already been done, so this is an optimisation we can make better once we have the list of proposed beneficial AI projects in hand.
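Concretely, the order-of-magnitude comparison suggested above might look like the following minimal sketch; the project names and all QALY and cost figures are invented for illustration, not drawn from the thread.

```python
# Minimal sketch (hypothetical data) of an order-of-magnitude QALY comparison:
# estimate QALYs per dollar for each candidate project, then compare exponents.

import math

# (project, estimated QALYs gained, estimated cost in USD) - all invented
candidates = [
    ("vaccine design",        5e7, 1e9),
    ("clean-water logistics", 2e6, 5e8),
    ("rare-disease therapy",  1e4, 2e8),
]

for name, qalys, cost in candidates:
    per_dollar = qalys / cost
    print(f"{name:22s} ~10^{round(math.log10(per_dollar)):+d} QALYs per dollar")
# vaccine design         ~10^-1 QALYs per dollar
# clean-water logistics  ~10^-2 QALYs per dollar
# rare-disease therapy   ~10^-4 QALYs per dollar
```

Even three invented projects span three orders of magnitude, which is why a rough exponent estimate is usually enough to rank them.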
As I explained in a previous comment (referencing here for other readers), there are some procedural reasons I don’t want to do pure EV maximization at the object level once the “pot” of benefits grows big enough to attract certain types of attention.
Why would humans be making these decisions? Why are we assuming that the AI can design vaccines, but can’t do this sort of reasoning itself to select how to benefit people?
I don’t think it’s very hard to imagine AI of the sort that is able to superhumanly design vaccines but not govern economies.