Any image produced by DALL-E that conveys, or could be used to convey, misalignment or other risks from AI would be very useful, because it could combine the two desired messages: “the AI problem is urgent,” and “misalignment is possible and dangerous.”
For example, if DALL-E responded to the prompt: “AI living with humans” by creating an image suggesting a hierarchy of AI over humans, it would serve both messages.
However, this is only worth a side note, because getting DALL-E to suggest misalignment organically might be very difficult.
Other image prompts might be: “The world as AI sees it,” “the power of intelligence,” “recursive self-improvement,” “the danger of creating life,” “god from the machine,” etc.
There are a lot of good DALL-E images floating around LessWrong that point toward the significance of alignment. We can copy and paste one into a LessWrong comment to post it.
Both prompts are from here:
https://www.lesswrong.com/posts/uKp6tBFStnsvrot5t/what-dall-e-2-can-and-cannot-do#:~:text=DALL%2DE%20can’t%20spell,written%20on%20a%20stop%20sign.)