We have automated a lot of things in the past couple of hundred years with unaccountable machines and unaccountable software, and the main difference with ML seems to be that it's less interpretable.
I strongly disagree with this characterization. Traditional software is perfectly accountable: its behavior can be understood and predicted exactly from its implementation and semantics. This is a huge part of what makes it so damn useful. It reifies the process of specifying rules for behavior. Where bad behavior is found, the underlying rule responsible can be identified, candidate fixes can be weighed in terms of their global implications for program behavior, and the chosen fix can then be implemented with perfect reliability (assuming the abstractions all hold).

This is, in my view, the best possible manifestation of accountability within a system. Accountability for the system itself, of course, lies with its human creators and operators. (The creators may not fully understand how to reason about the system, but it remains possible to build systems that are perfectly predictable and engineerable.) So system-internal accountability is much better than with humans, and system-external accountability is no worse.
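To make the "reified rules" point concrete, here is a toy sketch (entirely my own illustration; the function and threshold are made up, not drawn from the comment above). The rule governing behavior is a single explicit line, so a bad outcome can be traced to it, and changing it alters behavior globally and predictably:

```python
# Toy illustration: behavior is fully determined by explicit, inspectable rules.

def loan_decision(income: float, debt: float) -> str:
    """Approve a loan iff the debt-to-income ratio is below a threshold."""
    THRESHOLD = 0.4  # the reified rule, stated explicitly
    if income <= 0:
        return "reject"  # edge case: no income, no loan
    return "approve" if debt / income < THRESHOLD else "reject"

# If a bad outcome is found (say, approvals near a ratio of 0.39 prove too
# risky), the responsible rule is this one constant. Changing THRESHOLD to
# 0.35 changes the decision for every affected input, deterministically,
# and the full consequences of the change can be reasoned about in advance.
```

Contrast this with an ML model making the same decision: there is no single line to point at, and a "fix" (retraining) has diffuse, hard-to-predict effects on behavior elsewhere.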