My estimate is based on the structure of the problem and the entity trying to solve it. I'm not treating it as some black-box instance of "the dumbest thing can work". I agree that problems of that latter type should be assigned more than 0.01%.

I already knew quite a lot about GPT-4's strengths and weaknesses, and about the problem domain it would need to operate in for self-improvement to take place. If I were a completely uneducated layman from 1900 (or even from 2000, probably), then a probability of 10% or more might be reasonable.