An AI designed to minimize human suffering would simply kill all humans: no humans, no human suffering
This seems too strong; I’d suggest changing “would” to “might” or “could”.
Also, at two different points in the FAQ you use almost identical language about sticking humans in jars. You may want to change up the wording slightly or make it clear that you recognize the redundancy (“as discussed in question blah...” might do it).
Or change “designed to” in the quoted sentence to “designed only to”.