Software engineers tend to repeat mistakes when developing software. Automated static analysis tools can detect some of these mistakes early in the software development process. However, these tools tend to generate a significant number of false positive alerts. Because every alert must be inspected manually, a high number of false positives may make an automated static analysis tool too costly to use. In this research, we propose to rank the alerts generated by automated static analysis tools via an adaptive model that predicts the probability that an alert is a true fault in the system. The model adapts based upon the history of actions the software engineer has taken to either filter false positive alerts or fix true faults. We hypothesize that, given this adaptive ranking, software engineers will be more likely to act upon highly ranked alerts until the probability that the remaining alerts are true positives falls below a subjective threshold.
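
The abstract does not specify the form of the adaptive model. As one minimal sketch of the general idea, the following Python fragment estimates, per alert type, the probability of being a true fault from the engineer's past fix and filter actions (using simple Laplace smoothing, an assumption of this sketch rather than the paper's method), and ranks pending alerts by that probability. All names here (`AdaptiveAlertRanker`, `record_fix`, `record_filter`) are hypothetical and for illustration only.

```python
from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class AdaptiveAlertRanker:
    """Illustrative sketch: rank alerts by an estimated probability of being
    a true fault, updated from the engineer's feedback (fix vs. filter).
    This is not the model described in the paper."""

    # Per alert type: counts of alerts the engineer fixed (true faults)
    # and filtered (false positives).
    fixed: dict = field(default_factory=lambda: defaultdict(int))
    filtered: dict = field(default_factory=lambda: defaultdict(int))

    def record_fix(self, alert_type: str) -> None:
        self.fixed[alert_type] += 1

    def record_filter(self, alert_type: str) -> None:
        self.filtered[alert_type] += 1

    def true_fault_probability(self, alert_type: str) -> float:
        # Laplace-smoothed estimate, so an unseen alert type starts at 0.5.
        f = self.fixed[alert_type]
        s = self.filtered[alert_type]
        return (f + 1) / (f + s + 2)

    def rank(self, alert_types, threshold: float = 0.0):
        """Return alerts ordered by descending probability, keeping only
        those at or above the engineer's subjective threshold."""
        scored = [(self.true_fault_probability(a), a) for a in alert_types]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [(p, a) for p, a in scored if p >= threshold]


# Example: after one fix and one filter, ranking with a 0.5 threshold keeps
# only the alert type the engineer has previously treated as a true fault.
ranker = AdaptiveAlertRanker()
ranker.record_fix("NULL_DEREFERENCE")
ranker.record_filter("UNUSED_IMPORT")
print(ranker.rank(["NULL_DEREFERENCE", "UNUSED_IMPORT"], threshold=0.5))
```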