Lots of people, when confronted with various reasons why AGI would be dangerous, object that it’s all speculative, or just some sci-fi scenarios concocted by people with overactive imaginations. I think a rigorous, peer-reviewed, authoritative proof would strengthen the position against these sorts of objections.
I agree that a proof would be helpful, but probably not as impactful as one might hope. A proof of impossibility would have to rely on certain assumptions, like “superintelligence” or whatever, that could also be doubted or called sci-fi.
No, actually: assuming the machinery has a hard substrate and is self-maintaining is enough.
Now that you mention it, it does seem a bit odd that there hasn’t even been one rigorous, logically correct, and fully elaborated (i.e. all axioms enumerated) paper on this topic.
Or even a long post; there’s always something stopping it short of the ideal: some logic error, some glossed-over assumption, etc.
There are a few papers on AI risks, and I think they were pretty solid? But the problem is that, however one does it, the argument remains in the realm of conceptual, qualitative discussion if we can’t first agree on formal definitions of AGI or alignment that someone can then Do Math on.
Yes, that’s part of what I meant by enumerating all the axioms. Papers just assume that every potential reader has the same definition in mind for ‘AGI’, ‘AI’, and so on.
When clearly that is not the case. Since there isn’t an agreed-upon formal definition in the first place, that seems like the problem to tackle before anything downstream.
Well, that’s mainly a problem with not even having a clear definition of intelligence as a whole. We might have better luck with more focused definitions, like a “recursive agent” (by which I mean an agent whose world model is general enough to include itself).
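To give a sense of what I mean, here is a minimal, purely illustrative sketch of one way that could be formalized; the symbols below (the policy \(\pi_a\), the world model \(W_a\), the encoding \(e\)) are my own choices for the sake of the example, not an established definition:

\[
a = (\pi_a, W_a), \qquad \pi_a : \mathcal{O}^* \to \mathcal{A}, \qquad W_a : \mathcal{O}^* \to \Delta(\mathcal{S})
\]
\[
a \ \text{is recursive} \;\iff\; \exists\, e : (\pi_a, W_a) \hookrightarrow \mathcal{S} \ \text{ such that some state } s \text{ assigned positive probability by } W_a \text{ contains } e(\pi_a, W_a).
\]

Here \(\mathcal{O}^*\) is the set of observation histories, \(\mathcal{A}\) the action set, \(\Delta(\mathcal{S})\) the distributions over world states, and \(e\) an encoding of the agent itself as part of a world state; “the world model is general enough to include itself” then becomes “some states the model considers possible contain an encoding of the agent’s own policy and model.”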