The Edge home page featured an online editorial that downplayed AI art on the grounds that it merely combines images that already exist. But if you look closely enough, human artwork is also a combination of things that already existed.
One example is Blackballed Totem Drawing: Roger ‘The Rajah’ Brown, a charcoal drawing James Pate made in 2016. It was the Individual Artist Winner of the Governor’s Award for the Arts. At the microscopic scale, the artwork is nothing more than tiny black particles embedded in a large sheet of paper. Pate presumably did not make the paper he drew on, and the black particles came from his drawing utensils.
Zooming out, one can see depictions of Roger Brown shooting a basketball, much as one would see on TV. This collage-style artwork, as the description beside it says, depicts Roger Brown’s story.
These are the pieces that came together to form an award-winning piece of art. Even though most human art is a combination of things that already exist, I still value it. I am also amazed at AI’s image-generation capabilities.
Hedonic Treadmill and the Economy
The hedonic treadmill is the tendency for permanent changes in living conditions to produce only temporary changes in happiness. It keeps us perpetually wanting improvements to our lives: we spend money on the newest iPhones and focus our attention on improving our external circumstances. We ignore the quote:
“What lies before us and what lies behind us are tiny matters compared to what lies within us”
Some people eat chips to quell their boredom. The hedonic treadmill ensures that, despite rising incomes, people are not satisfied. I was surprised by how neatly the hedonic treadmill dovetails with profit maximization: if companies truly maximized profit, I suspect they would pay Big Pharma billions not to release drugs that raise the hedonic set point. According to https://www.hedweb.com/, the antidepressants Big Pharma does release act as mood flatteners.
Why FAI will not be an expected utility maximizer
Say we have a powerful superintelligent expected utility maximizer. It will turn the world into the precise configuration that maximizes its expected utility. No human has any say in what will happen.[1]
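The worry can be made concrete with a toy sketch (all names and numbers here are illustrative, not a model of any real system): a maximizer simply takes the argmax over candidate world configurations under its own utility function, and nothing in that loop consults a human.

```python
import random

random.seed(0)

# Toy world: each "configuration" is a tuple of binary features.
def expected_utility(config, utility, noise_trials=100):
    """Average utility over noisy outcomes of enacting `config`."""
    total = 0.0
    for _ in range(noise_trials):
        # With small probability each feature fails to be set as planned.
        outcome = tuple(f if random.random() > 0.05 else 1 - f for f in config)
        total += utility(outcome)
    return total / noise_trials

def choose(configs, utility):
    """The maximizer picks the single argmax configuration -- no human veto
    appears anywhere in this loop."""
    return max(configs, key=lambda c: expected_utility(c, utility))

# Illustrative utility: count how many features equal 1.
configs = [(0, 0, 0), (1, 0, 1), (1, 1, 1)]
best = choose(configs, utility=sum)
print(best)  # the maximizer steers toward the extreme configuration
```

The point of the sketch is structural: whatever `utility` is plugged in, the output is whichever configuration scores highest under it, and every other consideration is ignored.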
We do not want our lives optimized for us. We want autonomy, which expected utility maximizers would not give us. Nobody has found an outer-aligned utility function because powerful expected utility maximizers leave us no room to optimize. Autonomy is one value necessary for futures that we value. Walden Two is a dystopia where everyone is secretly manipulated but lives a happy social life.
Another reason we hate powerful optimization is status quo bias. Our world is extremely complex, and almost all utility functions have their maxima far from the current world. This is another reason expected utility maximizers create futures we hate.
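A toy version of this claim (a sketch under strong simplifying assumptions, not a formal result): describe the world as a string of binary features and draw a separable utility function at random. Its maximum almost never sits near any particular "current" world; on average it disagrees with the status quo on about half the features.

```python
import random

random.seed(1)

N = 16  # world described by 16 binary features
current = tuple(0 for _ in range(N))  # stand-in for the status quo

def argmax_world(weights):
    """For a separable utility (one weight per feature setting),
    the optimum just picks the better setting of each feature."""
    return tuple(0 if w0 >= w1 else 1 for (w0, w1) in weights)

def hamming(a, b):
    """Number of features on which two worlds disagree."""
    return sum(x != y for x, y in zip(a, b))

distances = []
for _ in range(1000):
    # Random utility: an independent random score for each feature setting.
    weights = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(N)]
    distances.append(hamming(current, argmax_world(weights)))

avg = sum(distances) / len(distances)
print(avg)  # roughly N/2: the random optimum is typically far from the status quo
```

Real utility functions are not random or separable, so this is only an intuition pump for why an arbitrary maximum tends to lie far from the world we currently occupy.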
We should instead focus on tools that give us an epistemic advantage and help us choose the world we want to live in. This could involve oracle AI, CEV, training to reduce cognitive biases, etc. This is why I think we should focus on helping people become more rational, or on approximating effective altruists, instead of on inner-aligning agent AI.
[1] Unless the utility function includes a brain emulation in a position to sculpt the world by choosing the AI’s utility function. I do not expect this to happen in practice.