I keep seeing odd bits of information from the grapevine about how Superintelligent Agents are on the horizon, with the latest being this Axios article. While I’m still unsure what to think, I started considering the possibility that these agents might already exist in an early form and are being tested online in secret. If that’s the case, could we detect them somehow?
Here are a few possible domains where such agents might already be tested:
1. Anonymous High-Performing Investors
Through platforms that allow pseudonymous accounts to trade stocks, cryptocurrencies, or other assets, an AI agent could potentially execute trades with an unusual level of insight and timing. If one were to notice an otherwise unremarkable account consistently achieving market-beating returns, beyond what luck or insider knowledge could explain, it might hint at something like this.
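One crude way to operationalize "beyond what luck can explain" is a significance test on the account's return stream. Here is a minimal sketch, assuming i.i.d. daily returns and a null hypothesis of zero edge (both strong simplifications: real returns are fat-tailed and autocorrelated, so this would at best flag candidates for closer inspection). The data is hypothetical.

```python
import math
import statistics

def outlier_zscore(daily_returns):
    """Z-score of the mean daily return under the null hypothesis
    that the account has zero true edge (returns are pure noise).
    Assumes i.i.d. roughly normal returns -- an illustrative
    simplification, not a realistic market model."""
    n = len(daily_returns)
    mean = statistics.fmean(daily_returns)
    sd = statistics.stdev(daily_returns)
    return mean / (sd / math.sqrt(n))  # t-statistic; ~z for large n

# Hypothetical account: ~0.4% per day with very low volatility over a
# trading year. A z-score this far above ~3 is essentially impossible
# to achieve by luck.
suspicious = [0.005, 0.003] * 126  # 252 trading days
print(outlier_zscore(suspicious) > 10)  # True
```

A real screen would also have to rule out the boring explanations (leverage, survivorship among many accounts, undisclosed insider information) before "superhuman agent" even enters the picture.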
2. Dominant New Gaming Accounts
We have already seen superhuman game-playing AI, such as OpenAI Five for Dota 2 or DeepMind's AlphaStar for StarCraft II. A superhuman AI tested covertly on the internet might show up as a new account with inexplicable performance, for example a player that suddenly tops the rankings with flawless tactics. Although cheats and bots already exist, a genuine superhuman AI might adapt in real time, learn strategies from human opponents, and systematically dominate high-level competition without obvious cheating.
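The "beyond luck" intuition can be made concrete here too, with an exact binomial test on a new account's win rate. This sketch assumes ladder matchmaking pairs evenly matched players, so a fair player's null win probability is about 50% (a simplification: real matchmaking is noisy, especially for fresh accounts):

```python
from math import comb

def winstreak_pvalue(wins, games, p=0.5):
    """Exact one-sided binomial tail: probability that a player with
    true win rate p wins at least `wins` of `games`. The p=0.5
    baseline assumes evenly matched opponents, which ladder
    matchmaking approximates but does not guarantee."""
    return sum(comb(games, k) * p**k * (1 - p)**(games - k)
               for k in range(wins, games + 1))

# A hypothetical fresh account winning 58 of its first 60 games:
# the chance of that under fair matchmaking is astronomically small.
print(winstreak_pvalue(58, 60) < 1e-12)  # True
```

Of course this only distinguishes "not luck" from luck; it cannot by itself separate a covert research agent from an ordinary smurf or a bot.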
3. Mysterious GitHub Repositories with Pristine Code
Imagine a newly created GitHub repository featuring code that is exceptionally clean, well-structured, and updated at a rapid pace—perhaps multiple substantial commits in a single day, all made by a single, seemingly tireless account. Moreover, these updates might involve sophisticated algorithms or solutions to problems that typically require large teams or specialized domain expertise.
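The "seemingly tireless" tell is actually checkable: human committers almost always leave a multi-hour quiet window each day, while a round-the-clock agent would not. A toy heuristic, assuming commit timestamps in a single time zone and ignoring confounders like CI bots, squashed rebases, and globally distributed teams sharing one account:

```python
from datetime import datetime, timedelta

def longest_daily_gap_hours(timestamps):
    """Longest quiet window in a typical day, estimated from which
    hours-of-day ever see a commit. Humans usually show a multi-hour
    sleep gap; an account committing around the clock will not.
    Time zones and CI-generated commits are confounders this toy
    check ignores."""
    hours = sorted({t.hour for t in timestamps})
    if not hours:
        return 24.0
    gaps = [hours[i + 1] - hours[i] for i in range(len(hours) - 1)]
    gaps.append(24 - hours[-1] + hours[0])  # wrap past midnight
    return float(max(gaps))

# Hypothetical tireless account: a commit every hour for three days.
start = datetime(2025, 1, 1)
tireless = [start + timedelta(hours=h) for h in range(72)]
print(longest_daily_gap_hours(tireless))  # 1.0 -- no sleep window
```

A human pattern (say, commits only between 09:00 and 17:00) would instead show a gap of 16 hours, so a persistent near-zero gap over months is at least suggestive, if far from conclusive.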
4. Other Platforms
Competitive programming websites, hacking contests, puzzle-solving communities, e-sports tournaments, or even social media accounts that consistently produce highly engaging content without a predictable "bot" posting style could also be testing grounds.
There are probably some methods one could use to analyse those domains for outlier performance, but I’m not entirely certain the premise holds water. Thoughts?
If you had a really superhuman agent, and you wanted to hide it, why would you blow your cover by playing silly games or making obvious GitHub commits? It’s already SOP for social media bots to hide behind many accounts (and use many styles). So unless you have access to a lot of investment information that’s typically kept confidential...
Even in stuff like cracking into other people’s computers, you’d want to avoid being extremely obvious.