See https://www.humanornot.ai/ , and its unofficial successor, https://www.turingtestchat.com/ . I’ve determined that I’m largely unable to tell whether I’m talking to a human or a bot within two minutes. :/
Tried it a bit, and this doesn’t seem like a test that measures what we care about, because the humans (at least some of them) are trying to fool you into thinking they’re bots. Consequently, even if you have a question that would immediately and reliably tell a human who is honestly trying to answer apart from a bot, you can’t win the game with it, because the humans won’t play along.
To make this meaningful, all human players should be trying to make others think they’re human.
Absolutely; for such tests to be effective, all participants would need to genuinely try to act as humans. The XP system the site introduced is a smart approach to encouraging “correct” participation, but there might be more effective incentive structures worth considering.
For instance, advanced AI or AGI systems could leverage platforms like these to discern the tactics and behaviors that make them more convincingly human. If such systems were highly motivated to learn this and had the funds, they could even pay human participants to ensure honest, genuine interaction, and then use the resulting data to learn more effective tactics for passing as human (at least in certain scenarios).