Re: A utility function measured in dollars vs hyperinflation
Wealth, then. Wealth measures access to resources, so if you don't trust your country's currency, convert to gold, silver, barrels of oil, etc. to measure it.
Unlike human happiness, wealth is easy to measure, and difficult to counterfeit.
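The conversion idea above can be sketched in a few lines. This is a minimal illustration only: the commodity prices below are made-up placeholder figures, not real market data, and the unit names are my own labels.

```python
# Hypothetical spot prices in dollars (illustrative only, not real data).
HYPOTHETICAL_PRICES_USD = {
    "gold_oz": 2000.0,   # assumed price per troy ounce of gold
    "silver_oz": 25.0,   # assumed price per troy ounce of silver
    "oil_bbl": 80.0,     # assumed price per barrel of oil
}

def wealth_in_commodities(dollars: float) -> dict:
    """Re-express a dollar amount as equivalent commodity quantities."""
    return {unit: dollars / price
            for unit, price in HYPOTHETICAL_PRICES_USD.items()}

basket = wealth_in_commodities(1_000_000.0)
```

Under the assumed prices, a million dollars comes out as 500 ounces of gold, 40,000 ounces of silver, or 12,500 barrels of oil; tracking wealth in several such units at once sidesteps reliance on any single currency.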
One of the more obvious roadmaps to creating AI involves the stock market waking up. The driving problem is how to allocate the world’s financial resources. The marketplace forms a network of intelligent agents, some human, some machine, trying to solve that problem by deciding where to invest their money. Over time, the machine portion increases—and eventually a huge and collectively very smart AI, distributed across the many countries of the world, is solving most of the problem.
Kurzweil Applied Intelligence—and others—are already working on this. The scenario seems realistic to me, partly because it is clear how the AI development is going to be funded. I do not really see how the corresponding “happiness-maximising” AI action plan is going to be financed. Consequently, it’s not clear whether it would be worth discussing, even if happiness were something real that could be measured and was difficult to fake.