3. AI which ultimately wants to not exist in future as a terminal goal. Fulfilling the task is on the simplest trajectory to non-existence
The first part of that sounds like it might self-destruct. And if it doesn't care about anything else... that could go badly. Maybe nuclear-level badly, depending. The second part makes it make more sense, though.
9. Ontological uncertainty about level of simulation.
So does it stop being trustworthy if it figures out it's not in a simulation? Or if it figures out it is being simulated?