Functionally epsilon at the lower end of alignment difficulty (the optimistic scenario), and at most 10% in the medium-difficulty scenario.
So AI risk deserves to be taken seriously, but the optimistic tails are much longer than commonly assumed, and one can increase capabilities without much risk.