I agree with you and XiXiDu that such observation should be possible in principle, but I also sort of agree with the detractors. You say,
Presumably developers of a large complicated AI will design it to be easy to debug...
Oh, I’m sure they’d try. But have you ever seen a large software project? There are usually mountains and mountains of code running in parallel on multiple nodes all over the place. Some pieces are written with good intentions; others are written in a caffeine-fueled fog two days before the deadline, and peppered with years-old comments to the effect of, “TODO: fix this when I have more time”. When the code breaks in some significant way, it’s usually easier to rewrite it from scratch than to debug the fault.
And that’s just enterprise software, which is orders of magnitude less complex than an AGI would be. So yes, in theory it should be possible to write transparent and easily debuggable code; in practice, I predict that people will write code the usual way instead.