Boeing's MCAS (https://en.wikipedia.org/wiki/Maneuvering_Characteristics_Augmentation_System) is blamed for more than 100 deaths. How much “AI” would a similar system need to include for a similar tragedy to count as “an event precipitated by AI”?
Great point. I'm not sure whether MCAS contained components similar enough to AI to resolve such a question. This source doesn't think it counts as AI (though it doesn't offer much of an argument), and I can't find any reference to machine learning or AI on the MCAS page. Clearly one could use AI tools to develop an automated control system like this, but I don't feel well positioned to judge whether it should count.
To clarify: I do not think MCAS specifically is an AI-based system. I was thinking of a hypothetical future system like it that does include a weak AI component, but where, similarly to ACAS, the issue is not so much a flaw in the AI itself as how it is used within a larger system.
In other words, I think your test needs to distinguish between a situation where a trustworthy AI was needed and the actual AI turned out to be unintentionally/unexpectedly untrustworthy, and a situation where the AI itself performed reasonably well but the way it was used was problematic, causing a disaster anyway.