A robot with the ability to dance autonomously has many potential applications, such as serving as a prototype dancer for choreographers or as a participant in stage performances alongside human dancers. A robot that dances autonomously must be able to extract several features from audio in real time, including tempo, beat, and style. It must also be able to produce a continuous sequence of humanlike gestures. We chose the Hitec RoboNova as the robot platform for our work on these problems. We have developed a beat identification algorithm that extracts beat positions from audio in real time across multiple consecutive songs. Our RoboNova can now produce sequences of smooth gestures that are synchronized with the predicted beats and match the tempo of the audio. Our algorithm can also be readily ported to the HUBO, a large humanoid robot capable of very humanlike motion.
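
As an illustration only (the algorithm itself is not detailed here), the sketch below shows one common way to obtain beat positions and then schedule gestures on those beats: offline beat tracking with librosa standing in for the real-time tracker, followed by cycling a hypothetical gesture set over the predicted beat times. The library choice, file name, and gesture names are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch: an offline stand-in for real-time beat tracking plus
# beat-synchronized gesture scheduling. Not the authors' actual algorithm.
import itertools

import librosa


def predict_beats(audio_path: str):
    """Estimate tempo (BPM) and beat times (seconds) from an audio file."""
    y, sr = librosa.load(audio_path)
    tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    beat_times = librosa.frames_to_time(beat_frames, sr=sr)
    return tempo, beat_times


def schedule_gestures(beat_times, gestures=("wave", "step", "turn", "bow")):
    """Assign one gesture per predicted beat, cycling through a gesture set.

    The gesture names are placeholders for whatever keyframed motions the
    robot platform actually supports.
    """
    gesture_cycle = itertools.cycle(gestures)
    return [(t, next(gesture_cycle)) for t in beat_times]


if __name__ == "__main__":
    tempo, beats = predict_beats("song.wav")  # hypothetical input file
    print("Estimated tempo (BPM):", tempo)
    for t, g in schedule_gestures(beats)[:8]:
        print(f"t = {t:6.2f} s -> {g}")
```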