Although they play an important role in any assessment procedure, web accessibility metrics remain underdeveloped and understudied. In addition, most metrics are geared toward conformance and are therefore ill suited to answering whether a web site presents critical barriers for a given user group. This paper addresses several open issues: How can accessibility be measured other than by conformance to certain guidelines? How can a metric merge results produced by accessibility evaluation tools with those produced by expert reviewers, and should it account for the tools' error rates? How can a metric also account for the severity of accessibility barriers? Can a metric tell us whether a web site is more accessible for certain user groups than for others? The paper presents a new methodology and an associated metric for measuring accessibility that efficiently combine expert reviews with automatic evaluation of web pages. Examples and data drawn from tests performed on 1500 web pages are also presented.