Suppose that Eliezer thinks there is a 99% risk of doom, and I think there is a 20% risk of doom.
Suppose that we solve some problem we both think of as incredibly important: say we find a way to solve ontology identification and make sense of the alien knowledge a model has about the world and about how to think, and the solution actually looks pretty practical and promising, suggests an angle of attack on other big theoretical problems, and generally suggests all these difficulties may be more tractable than we thought.
If that’s an incredible smashing success maybe my risk estimate has gone down from 20% to 10%, cutting risk in half.
And if it’s an incredible smashing success maybe Eliezer thinks that risk has gone down from 99% to 98%, cutting risk by ~1%.
I think there are basically just two separate issues at stake:
How much does this help solve the problem? I think that's mostly captured by bits of log odds reduction, and it's not where the real disagreement is (see the sketch just below this list).
How much are we doomed anyway, so that it doesn't matter?
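To make that concrete, here is a minimal sketch in Python (the helper names are mine, for illustration only, not part of the original argument) of the arithmetic behind both framings for the two updates above:

```python
import math

def relative_risk_reduction(p_before: float, p_after: float) -> float:
    """Fraction of the original risk removed (0.5 means 'cut in half')."""
    return (p_before - p_after) / p_before

def log_odds_reduction_bits(p_before: float, p_after: float) -> float:
    """Bits of log-odds reduction when the estimated risk of doom moves
    from p_before to p_after."""
    odds_before = p_before / (1 - p_before)
    odds_after = p_after / (1 - p_after)
    return math.log2(odds_before / odds_after)

for label, before, after in [("20% -> 10%", 0.20, 0.10), ("99% -> 98%", 0.99, 0.98)]:
    print(f"{label}: {relative_risk_reduction(before, after):.0%} of the risk removed, "
          f"{log_odds_reduction_bits(before, after):.2f} bits of log-odds reduction")
```

Measured in bits of log odds, both updates are roughly one bit (about 1.17 vs 1.01), so the apparent 50x gap between "cutting risk in half" and "cutting risk by ~1%" comes almost entirely from the second question, how doomed we think we are to begin with, rather than from any disagreement about how much the success helps.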