AI community building: EliezerKart

Having good relations between the various factions of AI research is key to achieving our common goal of a good future. Therefore, I propose an event to help bring us all together: EliezerKart! It is a go karting competition between three factions: AI capabilities researchers, AI existential safety researchers, and AI bias and ethics researchers.

The word Eliezer means “Help of my God” in Hebrew. The idea is that whichever team is best will have the help of their worldview, “their god”, during the competition. There is no relation to anyone named Eliezer whatsoever.

Using advanced deepfake technology, I have created a visualization of a Paul Christiano and Eliezer Yudkowsky team.

The race will probably take place in the desert or some cool city or something.

Factions

Here is a breakdown of the three factions:

Capabilities

They are the most straightforward faction, but also the most technical. They can use advanced AI to create a go kart autopilot, can simulate millions of race courses in advance to design the perfect kart, and can use GPT to coach their drivers. Unfortunately, they are not good at getting things right on the first critical try.

Safety

Safety has two overlapping subfactions.

Rationalists

Rationalists can use conditional prediction markets (kind of like a Futarchy) and other forecasting techniques to determine the best drivers, the best learning methods, etc. They can also use rationality to debate go kart driving technique much more rationally than the other factions.

Effective Altruists

The richest faction, they can pay for the most advanced go karts. However, they will spend months debating the metrics upon which to rate how “advanced” a go kart is.

Safety also knows how to do interpretability, which they can use to create adversarial examples that throw off capabilities.

Bias and ethics

The trickiest faction, they can lobby the government to change the laws and the rules of the event ahead of time, or even mid-race. They can also turn the crowd against their competitors. They can also refuse to acknowledge the power of the AI used by capabilities altogether; whether their AI will care remains to be seen.

Stakes

Ah, but this isn’t simply a team building exercise. There are also “prizes” in this race. Think of it kind of like a high-stakes donor lottery.

  • If capabilities wins:

    • The other factions cannot comment on machine learning unless they spend a week trying to train GANs.

    • Safety must inform capabilities of any ideas they have that can help create an even more helpful, harmless, and, most importantly, profitable assistant.

    • Bias and ethics must join the “safety and PR” departments of the AI companies.

  • If safety wins:

    • Everyone gets to enjoy a nice long AI summer!

    • Capabilities must spend a third of their time on interpretability and another third on AI approaches that are not just big inscrutable arrays of numbers.

    • Bias and ethics must only do research on whether AI is biased towards paperclips, and their ethics teams must start working for the effective altruists, particularly on the “is everyone dying ethical?” question.

    • Bias and ethics must lobby the government to air strike all the GPU data centers.

  • If bias and ethics win:

    • Every capabilities researcher will have a bias and ethics expert sit behind them while they work. Anytime the capabilities researcher does something just because they can, the bias and ethics expert whispers “technology is never neutral” and the capabilities researcher’s car is replaced by one that is 10% cheaper.

    • AI safety researchers must convert from their Machine God religion to atheism. They must also commit to working on an alignment strategy that, instead of maximizing CEV, minimizes the number of naughty words in the universe.

    • Capabilities must create drones with facial recognition technology that follow the AI safety and AI capabilities factions around and stream their lives to Twitch.tv.

So what do you think? Game on?