It’s going to turn out that having an LLM ask “but are you sure that’s correct?” 7 times increases the reflective consistency of people’s votes, and I am going to lmao.
You say this like it’s a devastating putdown, or invalidates my idea. I think you’re right that this is what would happen, but I think hooking that up to a state-of-the-art language model could cure schizophrenic delusions, mass delusions like 9/11 trutherism, and all sorts of amazing things like that. Maybe not right away (schizophrenics are not generally known for engaging with reality diligently and aggressively). But over time, this is a capability I predict would emerge from the system I am trying to build. Why should anyone believe that? I don’t know.
No, I’m not laughing at you, I’m laughing at the absurdity of the human critter, the fact that rubber-ducking works, and so on.