The idea, if I parse it correctly, is that in order to fail that hard you have to know at least part of what you’re doing, and automatic failures are always regular failures (Boom!). However, your implementation failed in some detail somewhere, and now the FAI is being weird: not necessarily entirely bad, just something unexpected or not-quite-what-we-wanted.
I think the FAI Critical Success Table would look more like:
Roll 1d6.
1. Everyone immediately obtains universally consistent root access without the Core Wars, in which the laws of the universe start literally bending to accommodate even the most contradictory concepts, such as “Torture 4^^^4 sentient beings for 5^^^5 years, with them experiencing pain solely for my personal enjoyment” not generating any kind of negative utility for anyone, including the sentient beings being tortured, and actions that lower any utility below optimal levels simply being timelessly non-existent (i.e. there is always some reason why they’re never desired, never implemented, never enforced, and that reason also happens to be a strictly dominant strategy for all implemented agents).
2 … 6. (variations on the same theme of ultimate transdimensional / unboxed godhood)