I probably wouldn’t do it. If I am being compensated for writing the note, it means someone is offering to pay me to write it. Since this is a weird request, and I cannot be sure of the person’s motivation, as a safety precaution I would assume malice. Especially if the “compensation” is large. Nobody in their right mind would pay me a million dollars to put a note on the wall saying that I wish my family would die just to prove a point. I would assume malice. This person could be a psycho serial killer who would kill my family in two weeks in order to teach me “not to mess with magic” or something like that. The point is, when a weird person makes a weird request, I believe it’s better (safer) not to engage. I’d probably call my family and warn them to be careful too, just in case.
Yes, I realized it in the dream. Since it was a lucid dream, I was fully aware that I was in a dream and remembered how things are supposed to taste in reality, so I could compare on the fly.
I know for a fact that I see colors in dreams. When I have a lucid dream I can experiment with my experiences, and I have confirmed that I see colors. I can also taste, feel cold and touch, hear sounds, and sometimes experience pain (I was once stabbed in a dream and it hurt like hell, even for several minutes after I woke up). In fact, I found the amount of detail in objects surprising: when I looked at a stone wall, it looked like the texture of a real wall, and when I touched it, it felt like a stone wall. Other senses did not get the same level of detail. When I tried tasting snow, it was kinda cold-ish and kinda tasted like snow, but not really. When I tasted food, it tasted really weird and not at all like I expected.
I am not an expert by any means, but here are my thoughts: while I find GPT-3 quite impressive, it’s not even close to AGI. All the models you mentioned are still focused on performing specific tasks. That alone will (probably) not be enough to create AGI, even if you keep scaling the models up. I believe AGI is at least decades away, perhaps even a hundred years away. Now, there is a possibility of stuff being developed in secret, which is impossible to account for, but I’d say the probability of those developments being significantly more advanced than the publicly available technologies is pretty low.