Current hidden Markov model (HMM) based acoustic modeling for large vocabulary continuous speech recognition (LVCSR) relies on the availability of abundant transcribed speech. Because transcribing speech is expensive and time-consuming, while huge amounts of unlabeled data are now readily available, semi-supervised learning (SSL) from both labeled and unlabeled data, which aims to reduce the development cost of LVCSR, has become more important than ever. In this paper, we propose SSL for LVCSR using multiple views constructed from different acoustic features and randomized decision trees. In addition, we develop multi-objective learning of HMM-based acoustic models by optimizing a hybrid criterion that combines the discriminative mutual information computed on labeled data with the entropy computed on unlabeled data. Experiments on Broadcast News demonstrate the benefits of the proposed methods.
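
As a sketch of such a hybrid criterion (the notation below, including the HMM parameters $\Lambda$, labeled set $\mathcal{L}$, unlabeled set $\mathcal{U}$, and interpolation weight $\alpha$, is illustrative rather than the exact formulation used in the paper), one may write
\[
\mathcal{F}(\Lambda) \;=\; \sum_{(O_r, W_r)\in\mathcal{L}} \log \frac{p_\Lambda(O_r \mid W_r)\,P(W_r)}{\sum_{W} p_\Lambda(O_r \mid W)\,P(W)} \;+\; \alpha \sum_{O_u\in\mathcal{U}} \sum_{W} P_\Lambda(W \mid O_u)\,\log P_\Lambda(W \mid O_u),
\]
where the first term is the maximum mutual information (MMI) objective over labeled utterances $(O_r, W_r)$ and the second is the negative conditional entropy of the word-sequence posteriors on unlabeled utterances $O_u$. Maximizing $\mathcal{F}$ thus encourages both discriminative fit to the available transcriptions and confident predictions on untranscribed speech.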