“When the limiting resource is money it’s quite clear that we should prioritize the uses where it goes the farthest. If there are three organizations that can distribute antimalarial nets for $5/each, $50/each, and $500/each we should just give to the first one.”
I strongly disagree.
The organization that can, for $500 each, combine anti-malaria nets with broad seeding of ethical consideration among recipients is far more important than one that gives $5 nets to someone who will live to become corrupt and destroy the local ecology.
Happiness requires two things: reasonably good health, and a broad sense of being grateful. Not grateful to something, just feeling grateful. That’s it.
People working rice fields are most often happy, yet they have nothing material. They often face hardship and die more frequently, but while living they are most often happy (first-hand experience).
But in helping people who seem in need, broad ethical concern must also include ecological impacts. Promoting health in a region whose ecological systems are sensitive to human expansion is counterproductive: more people survive to have more children, more people in need, putting pressure on the local ecology and destroying biodiversity...
Helping people who have no broad ethical concern is wrong; it promotes corruption (illegal allocation).
A general consequence of helping humans without ethical conscience is that future humans will not experience the stabilization provided by broad ecological diversity: pathogens will have plague-like consequences, because the natural microbes that keep “evolving” pathogens in check, consistent with a stabilized ecology, will not be around.
This is why the melting of the methane permafrost is such a major concern: the balance of microbes globally will change, and plants and animals will be affected as a result; i.e. the potential for mass/global extinctions.
The Point: Every proposed prioritization of resources should include a negative-feedback control system to ensure slow growth, reflective of broad benefits, while limiting negative consequences. Slow, because beneficial synergistic systems take time to form, while corrupt systems respond more quickly (they require little thought and planning). The result of a lack of negative feedback is a spike of resource utilization that depletes the broad resources available, and broad damage results; i.e. a lack of sustainability.
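The feedback claim above can be sketched numerically. This is a toy model with hypothetical numbers (growth rate, resource pool size), not a real allocation model: unchecked uptake grows exponentially and blows past the available resource pool, while a simple proportional negative-feedback term (logistic damping) slows growth as the pool is approached and stays within it.

```python
def simulate(steps, feedback, pool=1000.0, rate=0.5, start=1.0):
    """Return per-step resource utilization.

    feedback=True applies a negative-feedback term that damps growth
    as utilization approaches the resource pool; feedback=False grows
    without any check. All parameter values are illustrative only.
    """
    u = start
    history = []
    for _ in range(steps):
        if feedback:
            # Negative feedback: growth slows toward zero near the limit.
            u += rate * u * (1 - u / pool)
        else:
            # Unchecked growth: a spike that overshoots the pool.
            u += rate * u
        history.append(u)
    return history

unchecked = simulate(25, feedback=False)
damped = simulate(25, feedback=True)

print(max(unchecked) > 1000)   # unchecked uptake overshoots the pool
print(max(damped) <= 1000)     # feedback keeps uptake sustainable
```

The damped curve still grows, just slowly enough that the pool is never exceeded, which is the sense of “slow growth” argued for above.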
Ethics is a measure of organization, and as such provides predictability. The happiness of others can then be supported by broad planning for the future, seeding beneficial opportunities, not by simplistically acting in the present.
So the $500/each might be the far less costly solution overall.
One basic problem with the AGI Risk Survey is that humans consistently represent AI as a collective function with recognizable features of human cognition. The problem with modeling AGI on human cognitive features is that our body and environment stabilize our evolving network systems. Our hormones, limb nervous systems, brain lobes, nutritional sources and utilization, environmental interactions, socialization, breeding characteristics (genetic memory), and thousands of other recursive influencers provide a constantly evolving system (humans) that we fail to recognize has evolved over millions of years (exa-scale numbers of systemic step events).
AI is intended to evolve over much shorter time spans and in a contained interactive environment that will not likely have consistent ecological and socialization/genetic-selection consequences. Therefore, we cannot expect AGI to be recognizably human. Its thought processes will quickly evolve to optimize themselves within its environment and life-entity features.
What will be common to ALL cognitive life? This is tough since humans are the only current reference.
If humans could have lived forever, would we have evolved socially and intellectually? For what reason?
For AGI, there must be a complex system of interactive systems that WILL guide and stimulate its evolution, or the AGI has no potential to evolve, AND to evolve synergistically with humanity (in ways we can understand, or even recognize). But given the non-biological foundations of AI, those systems are not related to human evolution.
An innate part of being human is empathy. Is this what allows us to be human and to survive? Can we hardwire empathy into AI so that it evolves consistently with its environment AND develops human-like features that our simple minds can recognize, regardless of the foundations of AI construction?
Empathy is used both constructively and destructively by humans; that is recognized.
Just because humans are limited in their ability to consider broad implications does not mean AI will be as limited, nor as gifted (various forms of AI are intended to have restrictions on their intellectual capabilities: soldiers, cognitive functions for specific purposes such as mining equipment, etc.).
Regulating AI development is not practical; we don’t even regulate our own politicians (treason related to illegal allocations: intentionally weakening national security to illegally allocate national resources). Before we can regulate AI, monitoring must be established. Google Search: eliminate all corruption
So unless we develop broad universal monitoring (universities building and managing the NSA, for example), these discussions are pointless, because researchers will develop whatever catches their whim, up to and including: “Let’s see what happens when it sees an internet port?”