That being said, there is probably such a thing as too much hidden complexity. If most of the information in a given society is hidden, embodied by non-human intelligences, then life as a garden-variety human would be awfully boring.
That wouldn’t be my main worry. Setting aside general AI, one problem with smaller amounts of hidden complexity is that we rarely hide it completely successfully. Automation has failure modes. Abstractions leak. If an instance of complexity hiding is so successful that its rare failures can be treated as research problems, that’s fine. If another instance is fragile enough that its users regularly bump into its failures and learn how to recover from them (while remaining reliable enough to be useful), that’s also fine. The intermediate case, where it is reliable enough for most users to forget what is hidden, but fragile enough that recovering from its failures can’t be neglected, is the real problem...
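To make that intermediate case concrete, here is a minimal Python sketch; the names `flaky_lookup` and `cached_lookup`, and the 1% failure rate, are invented for illustration. The abstraction succeeds often enough that callers forget the remote call underneath, yet leaks rarely enough that nobody keeps the recovery knowledge warm.

```python
import random

# Hidden layer: a stand-in for a remote call that succeeds most of
# the time, and fails rarely enough that callers stop planning for it.
def flaky_lookup(key: str) -> str:
    if random.random() < 0.01:  # the rare, hidden failure mode
        raise TimeoutError("backend did not respond")
    return key.upper()

_cache: dict[str, str] = {}

# The abstraction: hides both the remote call and the cache in front
# of it. Usually it "just works", so users forget what is underneath.
def cached_lookup(key: str) -> str:
    if key not in _cache:
        _cache[key] = flaky_lookup(key)
    return _cache[key]

# The leak: once in a while a TimeoutError escapes, and deciding how
# to recover (retry? invalidate the cache? is the cached value stale?)
# demands exactly the knowledge the abstraction let its users forget.
for i in range(500):
    key = f"item-{i % 50}"
    try:
        cached_lookup(key)
    except TimeoutError:
        print(f"leak at {key}: caller must now reason about the hidden layer")
```

Nothing here is specific to caches; any wrapper whose failure rate sits in that awkward middle band, too rare to stay practiced at, too common to ignore, has the same shape.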