Internet blogs provide forums for discussion within virtual communities, allowing readers to post comments on what they read. However, such comments may contain abuse, such as personal attacks, offensive remarks about race or religion, or commercial spam, all of which reduce the value of community discussion. Ideally, filters would promote civil discourse by removing abusive comments while protecting free speech by not removing any comments unnecessarily. In this paper, we investigate the use of user flags to train filters for this task, with the goal of empowering each community to enforce its own standards. We find encouraging results in experiments on a large corpus of blog comment data with real user flags. We conclude by proposing several novel deployment schemes for filters in this setting.
D. Sculley