The process of evaluating, classifying, and assigning bugs to programmers is a difficult and time-consuming task that depends heavily on the quality of the bug report itself. It has been shown that the quality of reports originating from bug trackers and ticketing systems varies significantly. In this research, we apply Information Retrieval (IR) and Natural Language Processing (NLP) techniques to mine bug repositories. In particular, we measure the quality of the free-form descriptions submitted as part of bug reports in open source bug trackers. Properties of natural language that influence report quality are identified automatically and used as features in a classification task. The results of this automated quality assessment are then used to populate and enrich our existing software engineering ontology, supporting further analysis of the quality and maturity of bug trackers.