I’m aiming for sublinear overhead (so that the proportional overhead falls to 0 as the AI becomes more complex). If you told me that the overhead was a constant multiple of the cost of the unaligned AI, like 1x or 10x, that would make me pessimistic about the approach (with the degree of pessimism depending on the particular constant). It wouldn’t be doomed per se, but it would qualify for winning the prize. If you told me that the overhead grew faster than the cost of the unaligned AI, I’d consider that doom.
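To make the three regimes concrete, here is one way to formalize them (the notation is mine, not part of the original): write $C(n)$ for the cost of the unaligned AI at complexity level $n$ and $O(n)$ for the additional cost of alignment. Then

$$\text{sublinear: } \lim_{n\to\infty}\frac{O(n)}{C(n)} = 0, \qquad \text{constant: } \frac{O(n)}{C(n)} = k, \qquad \text{superlinear: } \lim_{n\to\infty}\frac{O(n)}{C(n)} = \infty.$$

On this reading, “1x or 10x the cost of the unaligned AI” is the constant case with $k = 1$ or $k = 10$, the target is the sublinear case, and “doom” is the superlinear case.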