The realisation and evaluation of a musical key extraction algorithm that operates directly on raw audio data are presented. The implementation is based on models of human auditory perception and music cognition; it is straightforward and has minimal computing requirements. First, a chromagram is computed from non-overlapping 100 ms time frames of audio; the chromagram represents the likelihood of occurrence of each chroma in the audio. This chromagram is then correlated with Krumhansl's key profiles, which represent the perceived stability of each chroma within the context of a particular musical key. The key whose profile correlates maximally with the computed chromagram is taken as the most likely key. An evaluation with 237 CD recordings of classical piano sonatas yielded a classification accuracy of 75.1%. When the exact, relative, dominant, subdominant, and parallel keys are counted as similar keys, the accuracy rises to 94.1%.
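
The correlation step can be sketched roughly as follows, assuming the chromagram has been reduced to a single 12-bin chroma vector accumulated over the 100 ms frames. The `estimate_key` function and the NumPy representation are illustrative, not the paper's implementation; the profile values are the published Krumhansl-Kessler probe-tone ratings.

```python
import numpy as np

# Krumhansl-Kessler key profiles: perceived stability of each chroma
# (C, C#, ..., B) within a major and a minor key context.
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])

NOTES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def estimate_key(chroma):
    """Return the key whose profile correlates best with `chroma`.

    `chroma` is a hypothetical 12-bin chroma vector summed over all
    100 ms analysis frames of the recording.
    """
    best_key, best_r = None, -np.inf
    for tonic in range(12):
        for profile, mode in ((MAJOR, 'major'), (MINOR, 'minor')):
            # Rotate the profile so its tonic aligns with this chroma.
            rotated = np.roll(profile, tonic)
            # Pearson correlation between chromagram and key profile.
            r = np.corrcoef(chroma, rotated)[0, 1]
            if r > best_r:
                best_key, best_r = f"{NOTES[tonic]} {mode}", r
    return best_key
```

All 24 candidate keys (12 major, 12 minor) are obtained by rotating the two base profiles, so only two stored profile vectors are needed, which keeps the computing requirements minimal.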