In this paper, we develop a general model, called Latency-Rate servers (LR servers), for the analysis of traffic scheduling algorithms in broadband packet networks. The behavior of an LR server is determined by two parameters: the latency and the allocated rate. Several well-known scheduling algorithms, such as Weighted Fair Queueing, VirtualClock, Self-Clocked Fair Queueing, Weighted Round Robin, and Deficit Round Robin, belong to the class of LR servers. We derive tight upper bounds on the end-to-end delay, internal burstiness, and buffer requirements of individual sessions in an arbitrary network of LR servers, in terms of the latencies of the individual schedulers in the network, when the session traffic is shaped by a token bucket. The theory of LR servers thus enables the computation of tight upper bounds on end-to-end delay and buffer requirements in a heterogeneous network, where individual servers may support different scheduling architectures and operate under different traffic models.
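To illustrate the flavor of such a bound (the notation here is assumed for exposition, not drawn from the abstract itself): for a session $i$ shaped by a token bucket with burst size $\sigma_i$ and allocated rate $\rho_i$, traversing $K$ LR servers whose latencies are $\Theta_i^{(1)}, \ldots, \Theta_i^{(K)}$, the standard LR-server delay bound takes the form

$$
D_i \;\le\; \frac{\sigma_i}{\rho_i} \;+\; \sum_{j=1}^{K} \Theta_i^{(j)},
$$

where $D_i$ is the end-to-end delay of session $i$. The bound is additive in the per-hop latencies: the burst term $\sigma_i/\rho_i$ is incurred only once, while each scheduler on the path contributes just its latency, regardless of the particular scheduling algorithm it implements.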