Since attackers trust computer systems to tell them the truth, it may be effective for those systems to lie or mislead. This could waste the attacker's resources while giving defenders time to organize a better defense, and it would provide a second line of defense when access controls have been breached. We propose here a probabilistic model of an attacker's beliefs in each of a set of "generic excuses" (including deception) for their inability to accomplish their goals. We show how the model can be updated by evidence presented to the attacker and by feedback from the attacker's own behavior. We present preliminary results with human subjects that support our theory, and we show how this analysis permits choosing appropriate times and methods for deceiving the attacker.
Neil C. Rowe
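
To make the idea of updating attacker beliefs concrete, the following is a minimal sketch, not the paper's actual model, of a Bayesian update over a small set of hypothetical "generic excuses" for a failed attack action. The excuse names, prior probabilities, and likelihood values are illustrative assumptions, not figures from the paper.

```python
from typing import Dict

# Hypothetical prior probabilities the attacker assigns to each excuse
# for why an action failed; the excuse names are illustrative only.
priors: Dict[str, float] = {
    "deception": 0.1,       # the system is deliberately lying
    "system_bug": 0.3,      # an ordinary software fault
    "access_control": 0.4,  # a legitimate permission restriction
    "own_error": 0.2,       # the attacker misused the command
}

# Hypothetical likelihoods P(evidence | excuse), e.g. for the evidence
# "an unusually slow, vague error message" under each excuse.
likelihoods: Dict[str, float] = {
    "deception": 0.7,
    "system_bug": 0.3,
    "access_control": 0.2,
    "own_error": 0.1,
}

def update_beliefs(priors: Dict[str, float],
                   likelihoods: Dict[str, float]) -> Dict[str, float]:
    """Return posterior P(excuse | evidence) by Bayes' rule."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

if __name__ == "__main__":
    posteriors = update_beliefs(priors, likelihoods)
    for excuse, p in sorted(posteriors.items(), key=lambda kv: -kv[1]):
        print(f"{excuse}: {p:.2f}")
```

In this sketch, repeated observations would be handled by feeding each posterior back in as the next prior; a defender could then time deception for moments when the attacker's estimated belief in the "deception" excuse is still low.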