Just my emotions! And I had an argument about the value of the artists behind the art (Can people value the source of the art? Is it likely that the majority of people would value it?). It's somewhat similar to Not for the Sake of Happiness (Alone). I decided to put the topic into a more global context (For how long can you keep replacing everything with AI content? What does that mean for the connection between people?). I'm very surprised that what I wrote was interesting to some people. What surprised you in my post?
I'm also interested in applying the idea of "prior knowledge" to values (or to argumentation, though not in a strictly probabilistic way). For example, maybe I don't value (human) art that much, or I'm very uncertain about how much I value it. But after considering some more global/fundamental questions ("prior values", "prior questions"), I may decide that I actually value human art quite a lot in certain contexts. I'm still developing this idea.
I feel (e.g. when reading arguments about why AGI "isn't that scary") that there aren't enough ways to describe disagreements. I hope to find a new way to show how and why people arrive at certain conclusions. In this post I tried to show the "fundamental" reasons for my specific opinion (worrying about AI content generation). I also tried to do a similar thing in a post about Intelligence (I wanted to know whether that type of thinking is rational or irrational).