Recent developments in networked visual communications have emphasized the need for objective, reliable, and easy-to-use video quality assessment (VQA) systems. This paper introduces the novel concept of quality-aware video (QAV), in which features extracted from the original video sequence are invisibly embedded into the video data itself. When such a QAV sequence is distributed over an error-prone network, a user who receives it can decode the hidden messages and use them to evaluate the quality degradation between the original and the received video sequences. Our first implementation of QAV employs 1) a novel reduced-reference VQA method based on a statistical model of natural video, and 2) a 3D discrete cosine transform (DCT)-based data hiding algorithm. The proposed approach does not assume prior knowledge of the distortion type, and the simulation results demonstrate its potential to generalize to different types and degrees of distortion.
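
For concreteness, the sketch below illustrates the general kind of 3D DCT-domain data hiding the second component refers to: feature bits are embedded by odd/even (quantization-index) modulation of mid-frequency coefficients of an 8x8x8 spatio-temporal block and read back at the receiver. The block size, coefficient positions, quantization step, modulation rule, and the function names embed_bits/extract_bits are illustrative assumptions, not the paper's exact algorithm.

    # Minimal sketch of 3D DCT-domain data hiding (assumed details, not the
    # authors' exact method): hide one bit per chosen mid-frequency coefficient
    # of an 8x8x8 spatio-temporal block via odd/even quantization-index modulation.
    import numpy as np
    from scipy.fft import dctn, idctn

    STEP = 24.0  # quantization step; larger = more robust but more visible

    def embed_bits(block, bits, coords):
        """Embed one bit per selected 3D DCT coefficient (parity of its index)."""
        c = dctn(block.astype(np.float64), norm="ortho")
        for bit, (t, y, x) in zip(bits, coords):
            q = np.round(c[t, y, x] / STEP)
            if int(q) % 2 != bit:  # force index parity to match the payload bit
                q += 1 if c[t, y, x] >= q * STEP else -1
            c[t, y, x] = q * STEP
        return idctn(c, norm="ortho")

    def extract_bits(block, coords):
        """Recover hidden bits from the received (possibly distorted) block."""
        c = dctn(block.astype(np.float64), norm="ortho")
        return [int(np.round(c[t, y, x] / STEP)) % 2 for (t, y, x) in coords]

    # Usage example: embed 4 feature bits into one spatio-temporal block.
    rng = np.random.default_rng(0)
    block = rng.integers(0, 256, size=(8, 8, 8)).astype(np.float64)
    coords = [(1, 2, 3), (2, 1, 3), (3, 2, 1), (2, 3, 2)]
    payload = [1, 0, 1, 1]
    marked = embed_bits(block, payload, coords)
    assert extract_bits(marked, coords) == payload

In the QAV setting, the payload would carry the reduced-reference features of the original sequence, so the receiver can compare them against features recomputed from the received video without access to the full reference.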