things with p = near certainty unless civilization destroys itself (AGI at more than human level)
I have to object to this—you’re applying the awesomeness heuristic, in which things that would be awesome are judged to have higher probabilities.
It would be awesome if axions existed (at this point, generally speaking, anything outside the Standard Model would be awesome), especially if axions allowed us to construct an Epic Space Drive, so it’s nearly certain that axions exist!
More mundanely, it would be awesome if we could spew carbon dioxide into the atmosphere without deadly serious consequences (it would!), so it’s nearly certain that global warming is nonsense!
The unspoken assumption behind your claim is: “It would be awesome if superhuman artificial general intelligence were possible (it would!), and the universe is awesome (it is!), therefore the possibility is nearly certain.”
I suppose there are other names for the awesomeness heuristic, but the phrase “awesomeness heuristic” is itself awesome, QED.
We don’t know whether superhuman intelligence is possible. Perhaps it isn’t. Perhaps, just as anything that can be computed can be computed by a Turing machine given enough time and memory, anything that can be understood can be understood by a human given enough time and scratch paper. Arguments that human-level artificial general intelligence is impossible are clearly nonsense (“Data is awesome” is true but not relevant; the existence of human-level humans is the applicable counterexample), but the same can’t be said for the superhuman-level case.
Or, more disturbingly, perhaps it’s possible, but we’re too stupid to figure it out, even in a billion years. Ignoring evolution, cats aren’t going to figure out how to prove Fermat’s Last Theorem, after all.
I personally believe that superhuman artificial general intelligence is probably possible, and that we’ll probably figure it out (so we should figure out ahead of time how to make sure that AI isn’t the last thing we ever figure out), but I certainly wouldn’t say that it’s “nearly certain”. That’s a very strong claim to make.
I don’t call it near certainty because it would be awesome. I call it near certainty because, given what we know, it would be implausible for mildly superhuman intelligence to be impossible, and implausible even for it to be supremely hard; and because we are on the technological cusp of being able to brute-force a duplicate of the human brain (via uploads) that could easily be made mildly superhuman, even as we are on the scientific cusp of understanding how the brain’s internal design works.
Hugely superhuman intelligence is of course another ball of wax, but even that I would rate as “hard to argue against”.