In this paper, we discuss how our work on evaluating Semantic Web knowledge base systems (KBSs) contributes to addressing some broader AI problems. First, we show how our approach provides a benchmarking solution for the Semantic Web, a new application area of AI. Second, we discuss how the approach is also beneficial in a more traditional AI context. We focus on issues such as scalability, performance tradeoffs, and the comparison of different classes of systems.

Benchmarking Semantic Web KBSs

Our research interest is to develop objective and unbiased ways to evaluate Semantic Web knowledge base systems (KBSs) (see Guo, Pan, and Heflin 2004). Specifically, we have conducted research on benchmarking KBSs that store, reason over, and query statements described in OWL, a standard language for describing and publishing Web ontologies. As a product of our work, we have developed the Lehigh University Benchmark (LUBM). The LUBM is, to the best of our knowledge, the first of its kind and ha...
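
To illustrate the kind of workload such a benchmark exercises (loading OWL statements into a store and answering queries over them), the following minimal sketch uses the rdflib Python library. This is an assumption for illustration only; rdflib is not part of LUBM or of the systems we evaluate, and the file name, namespace, class, and property names below are hypothetical rather than the official LUBM query set.

```python
# Minimal sketch of a load-and-query workload, assuming the rdflib library.
from rdflib import Graph

g = Graph()
# Hypothetical OWL data file in RDF/XML serialization.
g.parse("university0.owl", format="xml")

# A simple conjunctive SPARQL query over the loaded statements; the
# namespace and terms are illustrative placeholders.
query = """
PREFIX ub: <http://example.org/univ-bench.owl#>
SELECT ?student ?course
WHERE {
    ?student a ub:GraduateStudent .
    ?student ub:takesCourse ?course .
}
"""

for row in g.query(query):
    print(row.student, row.course)
```

A benchmark in this spirit would time the load and query steps over data sets of increasing size, and compare answer completeness across systems with different reasoning capabilities.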