Well, thanks for reading five of them :) I’ll try to answer your concerns:
Film:
Film is a good argument, but it mostly shows that we can handle “fakes” when they are framed right, e.g. when they come with context clues marking them as a movie. Generated images will not only often lack that framing, they will frequently be presented to us as if they depicted something real. I would argue that this will devalue the context clues and make it difficult or impossible to tell, in general, which images are real and which are generated.
X-risk compared to human-level AI:
a) Political/societal destabilization while nukes are a thing = bad. Or more generally: this interferes with our ability to deal with existing X-risks (including our ability to deal with the emergence of AGI).
b) We’d need to define X-risk a bit here. If we accept really bad societal outcomes (e.g. collapse of democracy followed by something decidedly bad), then my job of convincing you should be relatively easy. The confusion this will cause should systematically benefit the fringes and actors following a “tear it down” strategy. And I don’t think we are doing great in the stability department right now anyway.
We did epistemologically impressive things before photography:
True, but not having a tool is different from losing a tool you relied on for a long time. It’s also different from that tool suddenly doing something entirely different while still appearing to do the same thing on the surface.