With so much energy and effort apparently available for Eliezer-centered improvement initiatives (like the $100,000 bounty mentioned in this post), I’d like to propose that we seriously consider cloning Eliezer.
From a layman/outsider perspective, the technical side seems feasible and relatively cheap; the hardest part would be keeping it secret so as to avoid controversy and legal trouble. EA folks seem well connected and capable of that kind of coordination, even under the burden of secrecy and of keeping as few people “in the know” as possible.
Partially related (in the category of comparatively off-the-wall, but nonviolent, AI alignment strategies): at some point there was a suggestion that MIRI pay $10 million (or some such figure) to Terence Tao (or some such prodigy) to help with alignment work. Eliezer replied:
We’d absolutely pay him if he showed up and said he wanted to work on the problem. Every time I’ve asked about trying anything like this, all the advisors claim that you cannot pay people at the Terry Tao level to work on problems that don’t interest them. We have already extensively verified that it doesn’t particularly work for eg university professors.
I’d love to see more visibility into proposed strategies like these, i.e. strategies surrounding or above the object-level strategy of “everyone who can do alignment research puts their head down and works,” and the related “everyone else makes money in their comparative specialization/advantage and donates to MIRI/FHI/etc.” Even visibility into why various strategies were shot down would be useful, and a potential catalyst for farming further ideas from the community (even if, for game-theoretic reasons, one may never be able to confirm that an idea has been tried, as with my cloning suggestion).
Meta level: Why on earth would you say “Here is my secret idea, internet”? That doesn’t make any sense to me.
Many such cases.