We consider surveillance problems to be a set of system-adversary interaction problems in which the adversary can be modeled as a rational (selfish) agent trying to maximize his utility. We argue that appropriate adversary modeling can provide deep insights into system performance as well as clues for optimizing it against the adversary. Further, we propose that system designers should exploit the fact that they can impose certain restrictions on intruders and on the way they interact with the system. The designers can identify the assumptions under which the surveillance system will outperform the intruder and then enforce those assumptions on the system-intruder interaction as part of a 'scenario engineering' approach. We study both of these aspects using a game-theoretic framework and undertake practical experiments to verify the proposed enhancements.
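To make the adversary model concrete, the following minimal sketch (our illustration, not code from the paper; all strategy names and payoff values are assumed) casts the system-intruder interaction as a two-player game in which a rational intruder best-responds to the system's choice, and 'scenario engineering' is modeled as removing one of the intruder's options.

```python
# Toy game-theoretic adversary model: the surveillance system and the intruder
# each pick a pure strategy; the intruder maximizes his own utility given the
# system's move. All payoffs below are illustrative assumptions, not results.

system_strategies = ["patrol_gate", "patrol_fence"]
intruder_strategies = ["enter_gate", "enter_fence", "abort"]

# (system_utility, intruder_utility) for each strategy pair (assumed values).
payoffs = {
    ("patrol_gate",  "enter_gate"):  (5, -5),
    ("patrol_gate",  "enter_fence"): (-3, 4),
    ("patrol_gate",  "abort"):       (1, 0),
    ("patrol_fence", "enter_gate"):  (-3, 4),
    ("patrol_fence", "enter_fence"): (5, -5),
    ("patrol_fence", "abort"):       (1, 0),
}

def intruder_best_response(system_choice, allowed=intruder_strategies):
    """Rational (selfish) intruder: maximize his utility given the system's move."""
    return max(allowed, key=lambda a: payoffs[(system_choice, a)][1])

def system_security_level(allowed=intruder_strategies):
    """System picks the strategy with the best outcome, assuming the intruder
    best-responds to whatever the system plays."""
    def value(s):
        return payoffs[(s, intruder_best_response(s, allowed))][0]
    best = max(system_strategies, key=value)
    return best, value(best)

# Unrestricted interaction: the intruder exploits whichever side is unguarded.
print(system_security_level())                         # ('patrol_gate', -3)

# 'Scenario engineering': the designer enforces an assumption that removes an
# intruder option (say, the fence is made impassable) and re-evaluates.
print(system_security_level(["enter_gate", "abort"]))  # ('patrol_gate', 1)
```

Under these toy payoffs, restricting the intruder's options raises the system's guaranteed utility from -3 to 1, which is the kind of improvement the scenario-engineering argument aims for.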
Vivek K. Singh, Mohan S. Kankanhalli