The phenomenal growth of video on the web and the increasing sparseness of the meta information associated with it force us to look to the video content itself for signals that support search/information retrieval and browsing-based corpus exploration. A large fraction of users' search and browsing patterns centers on the people present in a video. Recognizing people at this scale in videos remains hard due to a) the absence of labeled data for such a large set of people and b) the large variation in pose/illumination/expression/age/occlusion/quality, etc. in the target corpus. We propose a system that learns and recognizes faces by combining signals from large-scale weakly labeled text, image, and video corpora. First, consistency learning is proposed to create face models for popular persons. We use text-image co-occurrence on the web as a weak signal of relevance and learn the set of consistent face models from this very large and noisy training set. Second, efficient and accurate face detection and fa...
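The consistency-learning idea can be pictured as filtering a noisy, weakly labeled pool of faces down to the mutually agreeing subset. The sketch below is a minimal illustration only, under the assumption that each candidate face (harvested because its source page mentions the person's name) is represented by an L2-normalized embedding; the function name `build_consistent_face_model` and the thresholds `sim_threshold` and `min_support` are hypothetical and not taken from the paper.

```python
import numpy as np

def build_consistent_face_model(embeddings: np.ndarray,
                                sim_threshold: float = 0.6,
                                min_support: float = 0.3):
    """Return a face model from weakly labeled candidates.

    embeddings: (N, D) L2-normalized face descriptors collected from
    images whose surrounding text co-occurs with the person's name.
    Keeps only faces that are consistent with a sizable fraction of
    the others, and averages them into a single model vector.
    """
    n = len(embeddings)
    if n == 0:
        return None
    # Pairwise cosine similarities (rows are unit-normalized).
    sims = embeddings @ embeddings.T
    # Support of each face = number of other faces that agree with it.
    support = (sims > sim_threshold).sum(axis=1) - 1  # exclude self-match
    consistent = support >= min_support * (n - 1)
    if not consistent.any():
        return None  # training set too noisy: no reliable model
    # The face model is the normalized mean of the consistent subset.
    model = embeddings[consistent].mean(axis=0)
    return model / np.linalg.norm(model)
```

In this reading, the text-image co-occurrence only proposes candidates; it is the mutual agreement among the candidate faces that decides which ones survive into the final model, which is what makes the approach tolerant of a very large and noisy training set.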