What’s a story you can suggest where this occurs? Not one from a novel, but one you can see happening?
Yair Halberstadt
I think lawfare tends to involve civil rather than criminal cases?
The reason Chinese individuals have an increasing quality of life seems to be far more that they manufacture things cheaply than any sort of cashing in they do.
If you are a cold hearted utilitarian, the answer is no, we should not massively increase the death penalty, because it will worsen the diplomatic situation, increase racial hostilities, and move the Overton window on “killing vile people,” such that assassinations, coups and other instability would feel less shocking and be more likely to succeed.
it will worsen the diplomatic situation
Which diplomatic situation? Many countries execute criminals, and that doesn’t much impact their foreign affairs. Foreign affairs are driven by things that impact other countries, not by domestic matters.
increase racial hostilities
Did you know that black people in the USA are more likely to advocate for tough-on-crime policies? After all, most crime is intraracial, so they’re the ones who have to live with it.
“killing vile people,” such that assassinations, coups and other instability would feel less shocking and be more likely to succeed
Or it’ll move the Overton window towards “killing people results in a swift execution,” such that people are less likely to do it. Are China and Iran less stable today as a result of their liberal application of the death penalty, or is that basically completely irrelevant to their problems? This is a claim you could argue whichever way you want, depending on how you feel about it. Making assertions is not evidence.
If you are a normal human, the answer is also no, because people don’t deserve to die. A child who is unlucky enough to be born with bad genes or brain connections, does not deserve to be later executed when he grows up. A child raised in a bad environment also does not deserve to be later executed when he grows up. Finally, an adult who had bad luck does not deserve to be executed.
Do people deserve to go to prison? Do people deserve to be punished at all? If not, and we should let criminals off scot-free, do people deserve to have their houses broken into, or to be threatened as they walk down the street?
We cannot simply abdicate responsibility by stating that people don’t deserve death. Nobody deserves anything; crime is real; we have to do something. Why is the death penalty suddenly the one thing we should only do if someone deserves it?
Most people who choose cryopreservation believe the chance of being revived is only about 5 percent. I think we have to treat cryopreservation as killing someone.
The case for corporal punishment
Because in most cases it’s very clear what happened, and the court case is mostly about all the legal quibbles, mitigating factors, etc.
If you don’t have eyewitness evidence or similar, sure, don’t kill them; if they’re guilty they’re likely to commit another crime soon, and then you’ll get them.
If you do, I don’t really care about the quibbles.
I think for similar reasons trade in ivory from elephants that died anyway is severely restricted.
The case for the death penalty
Why do you think wise AI advisors avoid the general problems with other AI?
Is that a wise AI that acts as an advisor, or a wise person who advises about AI?
Nonpartisan AI safety
I don’t see how you’ve shown it’s a bad assumption?
That violates assumption one (a single pass cannot produce superintelligent output).
This isn’t a solution to aligned LLMs being abused by humans, but to unaligned LLMs abusing humans.
context retrieval, reasoning models
Both of these are examples of more intelligent systems built on top of an LLM where the LLM itself has no state.
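For concreteness, here’s a minimal sketch of what I mean, with a hypothetical call_llm standing in for whatever single-pass model API you like (not a real one): a retrieval or reasoning loop is just an outer program repeatedly calling a stateless model, and the only “state” is plain text that a validator could read.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a single-pass, stateless model call (not a real API).
    return "<model output for: " + prompt[:40] + ">"

def reasoning_loop(question: str, steps: int = 3) -> str:
    """A 'reasoning model' as an outer loop: all state lives in the transcript text."""
    transcript = question
    for _ in range(steps):
        thought = call_llm(transcript)   # each call sees only text and keeps no memory
        transcript += "\n" + thought     # the accumulated text *is* the state
    return call_llm(transcript + "\nFinal answer:")
```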
Thirdly, the AI can use the environment as state in ways which would be nearly impossible to fully trace or mitigate.
Again, same problem: AI tool use is mediated by text input and output, and the validator just needs access to the LLM’s input and output (see the sketch at the end of this comment).
To take an extreme example: killing all humans, and the plan to kill all humans.
But the AI isn’t going to be so stupid as to write the plan in plaintext, so it’ll likely develop some sort of code.
So then the unaligned behaviour is writing in code, and the output that leads to it is developing the code.
But maybe the LLM can one-shot the code without needing to develop it? Sure, but then the validator (which by definition is the same LLM) can also tell that a code is being used.
Maybe the LLM can one-shot a code to jailbreak the validator? Sure, but that starts to break the assumption that the LLM isn’t superintelligent in a single pass.
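To make the validator idea concrete, here’s a minimal sketch using the same hypothetical call_llm stand-in as above (again, not a real API): because everything the model does is mediated by text, a wrapper can pass the same input and output to the same model acting as a validator, and withhold anything it flags, including output that looks like it’s setting up a code or hidden state.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a single-pass model call (not a real API).
    return "ALLOW"

def validate(user_input: str, model_output: str) -> bool:
    """Ask the same model whether this input/output pair looks unaligned."""
    verdict = call_llm(
        "You are a safety validator. Answer ALLOW or BLOCK only.\n"
        f"Input:\n{user_input}\n\nOutput:\n{model_output}\n\n"
        "Block anything unaligned, including writing in a code or hiding state in the environment."
    )
    return verdict.strip().upper().startswith("ALLOW")

def guarded_call(user_input: str) -> str:
    """Run the model, but only release output the validator allows."""
    model_output = call_llm(user_input)
    return model_output if validate(user_input, model_output) else "[withheld by validator]"
```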
I would only use the death penalty where we’re close to certain X actually committed the crime. That’s fairly common for shoplifting and murder, but unfortunately far less common for rape (unless it happens, e.g., on a street with CCTV cameras). I guess for probable rape I probably wouldn’t impose the death penalty, and would hope that they’ll get caught for a violent crime later.