Many formalisms exist today for specifying complex Markov chains. In contrast, formalisms for specifying rewards, which enable the analysis of long-run average performance properties, have remained quite primitive. Essentially, they support only relatively simple performance metrics that can be expressed as long-run averages of atomic rewards, i.e., rewards that are derivable directly from the individual states of the initial Markov chain specification. To handle complex performance metrics that depend on the accumulation of atomic rewards over sequences of states, the initial specification must be extended explicitly to provide the required state information. To solve this problem, this paper introduces a new formalism of temporal rewards, which allows complex quantitative properties to be expressed as temporal reward formulas. Together, an initial (discrete-time) Markov chain and the temporal reward formulas implicitly define an extended Markov chain...
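To make the baseline notion concrete, the following sketch computes a long-run average of atomic rewards for a small, hypothetical discrete-time Markov chain: each state carries a reward read directly off the chain, and the long-run average is the expected reward under the stationary distribution. The transition matrix and reward values here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical 3-state DTMC: P[i, j] = probability of moving from state i to j
P = np.array([
    [0.5, 0.5, 0.0],
    [0.2, 0.5, 0.3],
    [0.0, 0.4, 0.6],
])

# Atomic rewards: one value per state, readable directly from the chain
rewards = np.array([1.0, 0.0, 2.0])

# Stationary distribution pi solves pi P = pi with sum(pi) = 1.
# Stack (P^T - I) with the normalization row and solve in least squares.
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Long-run average reward = expected atomic reward under pi
long_run_avg = pi @ rewards
print(round(long_run_avg, 4))  # ≈ 0.8837 for this example chain
```

Metrics of this shape need no extension of the chain; the point of the paper is exactly the metrics that do not fit this form, because they accumulate rewards over sequences of states rather than reading them off single states.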