I have a lot of trouble justifying to myself reading through more than the first five paragraphs. Below is my commentary on what I’ve read.
I doubt that the short-term impacts of sub-human-level AI, be it generating prose, photographs, or films, on our epistemology are negative enough to justify weighting them as highly as the X-risk that is likely to emerge upon creating human-level AI.
We have been living in an adversarial information space for as long as we have had human civilization. Some of the most impressive changes to our epistemology were made prior to photographs (empiricism, rationalism, etc.), and some were made after (they are cool!). It will require modifying how we judge the accuracy of a given claim when we can no longer trust photos/videos in low-stakes situations (we've not been able to trust them in high-stakes situations for as long as they have existed; see special effects, filmmaking, conspiracies about any number of recorded claims, etc.), but that is just a normal fact of human societal evolution.
If you want to convince (me, at least) that this is a potentially society-ending threat, I would love an argument/hook that addresses my claims above.
Well, thanks for reading five of them :) I’ll try to answer your concerns:
Film:
Film is a good argument, but it mostly shows that we can handle “fakes” when they are framed right, e.g. when they come with context clues marking them as a movie. Generated images will not only often lack that framing, they will be presented to us framed as if they depicted something real. I would argue that this will devalue the context clues and make it difficult or impossible to tell, in general, which images are real and which are generated.
X-risk compared to human level AI:
a) Political/societal destabilization while nukes are a thing = bad. Or more generally: this interferes with our ability to deal with existing X-risks (including our ability to deal with the emergence of AGI).
b) We’d need to define X-risk a bit here. If we accept really bad societal outcomes (e.g. collapse of democracy followed by something decidedly bad), then my job of convincing you should be relatively easy. The confusion this will cause should systematically benefit the fringes and actors following a “tear it down” strategy. And I don’t think we are doing great in the stability department right now anyway.
We did epistemologically impressive things before photography:
True, but not having a tool is different from losing a tool you have relied on for a long time. It’s also different from that tool suddenly doing something entirely different while still appearing, on the surface, to do the same thing.