Under the Eliezerian view (the pessimistic view that puts <10% chances of success), these approaches are basically doomed. (See the logistic success curve.)
Now, I can't give overwhelming evidence for this position. Wisps of evidence, maybe, but not an overwhelming mountain of it.
Under these sorts of assumptions, building a container for an arbitrary superintelligence such that it has only an 80% chance of being immediately lethal, and a 5% chance of being marginally useful, is an achievement.
(and all possible steelmannings; that's a huge space)