Monitoring surveillance footage to see whether anyone is breaking rules seems to be quite a bounded task, and in fact it is one that we are already in the process of automating (using our current AI systems, which are basically all bounded).
That seems true, but if this surveillance monitoring isn’t 100% effective, won’t you still need an agential police to deal with any threats that manage to evade the surveillance? Or do you buy Eric’s argument that we can use a period of “unopposed preparation” to make sure that the defense, even though it’s bounded, is still much more capable than any agential threat it might face?
Sorry, when I said “there are lots of other tasks that are not as clear”, I meant that there are a lot of other tasks relevant to policing and security that are not as clear, such as policing that deals with threats that evade surveillance. I think the optimism here comes from our ability to decompose tasks, such that we can take a task that seems to require goal-directed agency (like “be the police”) and turn it into a bunch of subtasks that no longer look agential.