Agree, I think the safety-critical vs not-safety-critical distinction is better for sorting out what semi-reliable AI systems will/won’t be useful for.
Is law (AI lawyer) safety critical?
If there’s a death penalty at play, I’d say yeah (though ofc traditionally “safety-critical” is used to refer to engineering systems only). But if it’s a traffic ticket at play, I’d say no.
I’m going by something like the Wikipedia definition, where a safety-critical system is “a system whose failure or malfunction may result in one (or more) of the following outcomes: (a) death or serious injury to people, (b) loss or severe damage to equipment/property, and/or (c) [severe] environmental harm”.