Another important failure point: what if the AI actually IS friendly? That red wire, from the AI’s perspective, represents an enormous existential risk for the humans it wants to protect. So, it carefully removes the wire, and expends inordinate resources making sure that the wire is never subject to the slightest voltage.
With that prior, the primary hazard of a meteor impact is not the fireball, or the suborbital debris, or the choking dust, or the subsequent ice age; humans might have orbital colonies or something, so there's a nonzero chance of survival. The primary risk is that the impact might crack the superconducting Faraday cage around the bunker containing the Magical Doom Wire. Projects will be budgeted accordingly.
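For concreteness, here's a back-of-the-envelope sketch of that budgeting (every number is made up for illustration): if the AI's hard-coded disutility for voltage on the wire dwarfs even its disutility for human extinction, then a microscopic cage-crack probability dominates the expected-loss calculation.

```python
# A minimal sketch, with hypothetical numbers, of how a skewed prior warps an
# expected-disutility budget. U_WIRE stands in for a hard-coded belief that
# any voltage on the red wire is the worst possible outcome.

U_EXTINCTION = 1e18   # assumed disutility of humanity's extinction
U_WIRE = 1e30         # assumed disutility the AI assigns to wire voltage

P_IMPACT = 1e-7       # assumed annual probability of a major meteor impact

# Conditional on an impact: direct effects are survivable with some
# probability, but a cracked Faraday cage might let voltage reach the wire.
P_DOOM_DIRECT = 0.9   # P(extinction | impact) via fireball, dust, ice age
P_CRACK = 1e-6        # P(cage cracks and wire sees voltage | impact)

direct_term = P_IMPACT * P_DOOM_DIRECT * U_EXTINCTION
wire_term = P_IMPACT * P_CRACK * U_WIRE

print(f"direct-effects term: {direct_term:.3g}")  # ~9e10
print(f"doom-wire term:      {wire_term:.3g}")    # ~1e17
# The wire term wins by six orders of magnitude, so the Faraday cage,
# not asteroid deflection, gets the budget.
```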
An otherwise Friendly AI with risk assessment that badly skewed would present hazards far more exotic than accidental self-destruction at an inopportune time.