Ehn. Kind of irrelevant to p(doom). War and violent conflict is disturbing, but not all that much more so with tool-level AI.
Especially in conflicts where the “victims” aren’t particularly peaceful themselves, it’s hard to see AI as anything but targeting assistance, which may reduce indiscriminate/large-scale killing.
I’m being heavily downvoted here, but what exactly did I say wrong? In fact, I believe I said nothing wrong.
It does worsen the situation, with Israeli military forces mass murdering Palestinian civilians based on AI decisions, and operators just rubber-stamping the actions.
Here is the +972 Mag Report: https://www.972mag.com/lavender-ai-israeli-army-gaza/
I highly advise you to read it, as it goes into greater detail about exactly how the system works internally.
I can only speak for myself, but I downvoted for leaning very heavily on a current political conflict, because it’s notoriously difficult to reason about generalities due to the mindkilling effect of taking sides. The fact that I seem to be on a different side than you (though there ain’t no side that’s fully in the right—the whole idea of ethnic and religious hatred is really intractable) is only secondary.
I regret engaging on that level. I should have stuck with my main reaction that “individual human conflict is no more likely to lead to AI doom than nuclear doom”. It didn’t change the overall probability IMO.
I’m sorry, but the presented example of Israel’s Lavender system shows the exact opposite: it exacerbates an already prevalent mass murder of innocent civilians to an even greater degree, with operators just rubber-stamping the decisions. I’m afraid that in this example it does absolutely nothing to reduce indiscriminate, large-scale killing, but directly facilitates it. It’s right there in the attached +972 Mag report.
And I’m sorry, but did you mean to call the currently targeted Palestinian civilians not particularly peaceful “victims” (in quotes)? Because to me that sounds barbaric, utterly insane, and simply immoral, especially during the ongoing Gazan genocide.