Why would humans be making these decisions? Why are we assuming that the AI can design vaccines, but not do this sort of reasoning to select how to benefit people by itself?
I don’t think it’s very hard to imagine AI of the sort that is able to superhumanly design vaccines but not govern economies.
I would avoid giving heuristics like that much weight. I would say to do QALY calculations, at least to the order of magnitude. The QALYs at stake in different possible projects can differ by orders of magnitude. Which projects are on the table depends on how good the tech is and what has already been done, so this is an optimisation we can make better once we have the list of proposed beneficial AI projects in hand.
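To make the order-of-magnitude comparison concrete, here is a minimal sketch; the project names, reach figures, and per-person QALY gains are made-up placeholders purely for illustration, not real estimates:

```python
import math

# Illustrative placeholders: (people reached, QALYs gained per person).
# None of these names or numbers are real estimates.
projects = {
    "vaccine_design": (1e7, 0.5),
    "crop_optimisation": (1e8, 0.01),
    "rare_disease_therapy": (1e4, 10.0),
}

for name, (people, qalys_per_person) in projects.items():
    total = people * qalys_per_person
    # Round to the nearest order of magnitude, per the suggestion above.
    magnitude = round(math.log10(total))
    print(f"{name}: ~10^{magnitude} QALYs")
```

Even at this level of crudeness, the spread between projects can be several orders of magnitude, which is usually enough to rank them.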
As I explained in a previous comment (referencing here for other readers), there are some procedural reasons I don’t want to do pure EV maximization at the object level once the “pot” of benefits grows big enough to attract certain types of attention.