Any arguments for AI safety should be accompanied by images from DALL-E 2.
One of the key factors which makes AI safety such a low-priority topic is a complete lack of urgency. Dangerous AI seems like a science-fiction element that's always a century away, and we can fight this perception by demonstrating the potential and growth of AI capability.
No demonstration of AI capability has the same immediate visceral power as DALL-E 2.
In longer-form arguments, urgency could also be demonstrated through GPT-3's responses to prompts, but DALL-E 2 is better, especially if you can also implicitly suggest a deeper understanding of concepts by having DALL-E 2 represent something more abstract.
(Inspired by a comment from jcp29)
Any DALL-E image that conveys, or could be used to convey, misalignment or other risks from AI would be especially useful, because it would combine the two desired messages: “the AI problem is urgent” and “misalignment is possible and dangerous.”
For example, if DALL-E responded to the prompt “AI living with humans” by creating an image suggesting a hierarchy of AI over humans, it would serve both messages.
However, this deserves only a side note, because eliciting such suggestions of misalignment organically might be very difficult.
Other image prompts might be: “The world as AI sees it,” “the power of intelligence,” “recursive self-improvement,” “the danger of creating life,” “god from the machine,” etc.
Both prompts are from here: https://www.lesswrong.com/posts/uKp6tBFStnsvrot5t/what-dall-e-2-can-and-cannot-do#:~:text=DALL%2DE%20can’t%20spell,written%20on%20a%20stop%20sign.)
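For anyone who wants to try these prompts themselves, here is a minimal sketch of batch-generating them through OpenAI's Python SDK. The model name, image size, and output handling are my assumptions for illustration; they are not from the original post.

```python
# Minimal sketch: generate the suggested prompts with OpenAI's Python SDK.
# Assumes openai>=1.0 and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "The world as AI sees it",
    "the power of intelligence",
    "recursive self-improvement",
    "the danger of creating life",
    "god from the machine",
]

for prompt in prompts:
    response = client.images.generate(
        model="dall-e-2",  # assumed model name for illustration
        prompt=prompt,
        n=1,
        size="1024x1024",
    )
    # Each response carries a short-lived URL for the generated image.
    print(prompt, "->", response.data[0].url)
```

Generating several candidates per prompt (raising n) and picking the most evocative image would probably give better results than taking the first output.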
There are also a lot of good DALL-E images floating around LessWrong that point toward the significance of alignment. They can be copied and pasted directly into a LessWrong comment.