Machine learning systems are increasingly deployed in adversarial settings such as intrusion detection, where a classifier must decide whether a sequence of actions comes from a legitimate user. The attacker, however, being an adversarial agent, can reverse-engineer the classifier and successfully masquerade as a legitimate user. In this paper, we propose the notion of a Proactive Intrusion Detection System (IDS) that counters such attacks by incorporating feedback into the detection process: a proactive IDS influences the user's actions and observes the responses in different situations to decide whether the user is an intruder. We present a formal analysis of proactive intrusion detection and, building on the adversarial relationship between the IDS and the attacker, a game-theoretic analysis. Finally, we present experimental results on real and synthetic data that confirm the predictions of the analysis.
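To make the feedback idea concrete, below is a minimal illustrative sketch, not the paper's actual algorithm, of one way a proactive IDS could operate: it injects probes that influence the session, observes the user's responses, and performs a Bayesian update on the hypothesis that the user is an intruder. The probe names, the `run_proactive_ids` function, and the likelihood tables are all hypothetical assumptions introduced for illustration.

```python
import random

# Hypothetical likelihoods: how a legitimate user vs. an intruder tends to
# respond to each probe the IDS can inject. Illustrative values only.
LIKELIHOOD = {
    "slow_down_network": {"legit": {"wait": 0.8, "retry": 0.2},
                          "intruder": {"wait": 0.3, "retry": 0.7}},
    "prompt_reauth":     {"legit": {"comply": 0.9, "abort": 0.1},
                          "intruder": {"comply": 0.4, "abort": 0.6}},
}

def run_proactive_ids(observe, prior_intruder=0.5, threshold=0.95):
    """Issue probes, observe responses, and update P(intruder) by Bayes' rule."""
    p = prior_intruder
    for probe in LIKELIHOOD:
        response = observe(probe)  # influence the user, then watch the reaction
        l_intr = LIKELIHOOD[probe]["intruder"].get(response, 1e-6)
        l_legit = LIKELIHOOD[probe]["legit"].get(response, 1e-6)
        p = (l_intr * p) / (l_intr * p + l_legit * (1 - p))  # Bayesian update
        if p >= threshold:          # confident enough to flag as intruder
            return "intruder", p
        if p <= 1 - threshold:      # confident enough to clear the user
            return "legitimate", p
    return ("intruder" if p >= 0.5 else "legitimate"), p

# Usage: simulate an intruder whose responses follow the intruder model.
def simulated_intruder(probe):
    dist = LIKELIHOOD[probe]["intruder"]
    return random.choices(list(dist), weights=list(dist.values()))[0]

print(run_proactive_ids(simulated_intruder))
```

In this sketch the probes are fixed; a game-theoretic treatment like the one the abstract refers to would instead choose probes strategically, anticipating that the attacker also adapts its responses.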