There are many statistical testing methods that output what are essentially proofs; e.g. statements of the form “probability of a failure existing is at most 10^(-100)”. Why would this not be sufficient?
(Approximate orders of magnitude:)
Number of atoms in the universe: 10^80
Number of atoms in a human being: 10^28
Number of humans that have existed: 10^10
Number of AGI-creating-level inventions expected to be made by humans: 10^0–10^1
Number of AGI-creating-level inventions expected to be made by 1% (10^-2) of the universe turned into computronium, with no more than human-level thought-to-matter efficiency, extrapolating linearly: 10^(80 − 2 − 10 − 28) = 10^40.
Hmm, that doesn’t sound that bad, but we got from 10^(-100) to 10^(-60) really fast (10^40 chances at a 10^(-100)-probability failure gives roughly a 10^(-60) overall failure probability). Also, I don’t think Eliezer was talking about that kind of statistical method.
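A quick sketch of that arithmetic, if it helps (the exponents are just the approximate figures listed above, and the 10^(-100)-per-proof figure is the one from the opening question, not measured values):

```python
# Rough order-of-magnitude sketch of the argument above.
# All exponents are the approximate figures quoted in this thread.

atoms_in_universe_exp = 80      # ~10^80 atoms
computronium_fraction_exp = -2  # 1% of the universe
atoms_per_human_exp = 28        # ~10^28 atoms per human
humans_ever_exp = 10            # ~10^10 humans have existed
inventions_by_humans_exp = 0    # ~10^0 AGI-level inventions (lower end of 10^0–10^1)

# AGI-level inventions expected from the computronium, extrapolating linearly
# from the human inventions-per-atom rate.
inventions_exp = (atoms_in_universe_exp + computronium_fraction_exp
                  - atoms_per_human_exp - humans_ever_exp
                  + inventions_by_humans_exp)
print(f"inventions ~ 10^{inventions_exp}")  # 10^40

# If each such event independently has a 10^-100 chance of hiding a failure,
# the chance of at least one failure is roughly 10^40 * 10^-100 = 10^-60.
per_proof_failure_exp = -100
overall_failure_exp = inventions_exp + per_proof_failure_exp
print(f"overall failure probability ~ 10^{overall_failure_exp}")  # 10^-60
```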
I mean, I could easily make the 100 into a 400, so I don’t think this is that relevant.
Yes, the last sentence is probably my real “objection”. (Well, I don’t object to your statements; I just don’t think that’s what Eliezer meant. Even if you run a non-statistical, deterministic theorem prover, using current hardware the probability of failure is much above 10^(-100).)
The silly part of the comment was just a reminder (partly to myself) that AGI problems can span orders of magnitude so ridiculously outside the usual human scale that one can’t quite approximate (the number of atoms in the universe)^(-1) as zero without thinking carefully about it.
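On the hardware point above, a back-of-the-envelope illustration (the per-step fault rate below is a made-up, deliberately optimistic assumption, not a measured figure) of why a physical machine’s failure probability swamps 10^(-100):

```python
# Illustrative only: 'fault_per_step' is a hypothetical, very optimistic
# per-operation hardware fault probability, not a measured value.
fault_per_step = 1e-20   # pretend one undetected fault per 10^20 operations
steps = 10**12           # a modest-sized proof-checking run

# For small p, P(at least one fault in n steps) = 1 - (1 - p)^n ~= n * p.
# (Computed via the approximation, since 1 - 1e-20 rounds to exactly 1.0
# in double precision and the direct formula would lose the effect.)
p_any_fault = steps * fault_per_step
print(f"P(any hardware fault) ~ {p_any_fault:.0e}")  # ~1e-08, vastly above 1e-100
```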