We propose a novel approach to music emotion recognition that combines standard and melodic features extracted directly from audio. To this end, we created a new audio dataset organized similarly to the one used in the MIREX mood task. From this data, 253 standard and 98 melodic features are extracted and used with several supervised learning techniques. Results show that melodic features generally outperform standard audio features. The best result, an F-measure of 64%, was achieved with only 11 features (9 melodic and 2 standard), selected via ReliefF feature selection and classified with support vector machines.
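To make the feature-selection step concrete, the sketch below implements a simplified ReliefF scoring in plain NumPy: each feature is rewarded when it separates a sample from its nearest different-class neighbors ("misses") and penalized when it differs from same-class neighbors ("hits"). This is an illustrative toy, not the paper's implementation; the function name, the toy data, and the parameter `k` are our own assumptions.

```python
import numpy as np

def relieff_scores(X, y, k=3):
    """Simplified ReliefF: score features by how much they differ
    toward the k nearest misses versus the k nearest hits."""
    X = np.asarray(X, dtype=float)
    # Scale each feature to [0, 1] so per-feature differences are comparable.
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0
    Xn = (X - X.min(axis=0)) / span
    n, d = Xn.shape
    w = np.zeros(d)
    for i in range(n):
        dist = np.abs(Xn - Xn[i]).sum(axis=1)  # L1 distance to all samples
        dist[i] = np.inf                       # exclude the sample itself
        hits = np.argsort(np.where(y == y[i], dist, np.inf))[:k]
        miss = np.argsort(np.where(y != y[i], dist, np.inf))[:k]
        w += np.abs(Xn[miss] - Xn[i]).mean(axis=0)  # reward miss separation
        w -= np.abs(Xn[hits] - Xn[i]).mean(axis=0)  # penalize hit separation
    return w / n

# Toy data: feature 0 carries the class signal, feature 1 is pure noise.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
X = np.column_stack([y + 0.1 * rng.normal(size=100),
                     rng.normal(size=100)])
scores = relieff_scores(X, y)
```

In the paper's pipeline, the top-ranked features from a step like this would then be fed to a support vector machine classifier.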