Video streams are ubiquitous in applications such as surveillance, games, and live broadcast. Processing and analyzing this data is challenging because algorithms must be efficient enough to handle it on the fly. From a theoretical standpoint, video streams have their own specificities: they mix spatial and temporal dimensions, and compared to standard video sequences, half of the information is missing, i.e., the future is unknown. The theoretical part of our work is motivated by the widespread use of the Gaussian kernel in tools such as bilateral filtering and mean-shift segmentation. We formally derive its equivalent for video streams, as well as a dedicated expression for isotropic diffusion. Building upon this theoretical ground, we adapt several powerful classical algorithms to video streams: bilateral filtering, mean-shift segmentation, and anisotropic diffusion.
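For reference, the classical Gaussian-based tool that this work extends to streams is the bilateral filter on a still image; the formulation below is the standard one from the image-filtering literature (with the usual notation for the spatial and range scales \(\sigma_s\), \(\sigma_r\) and the neighborhood \(\mathcal{N}(p)\)), not the stream-specific derivation contributed here:
\[
BF[I]_p \;=\; \frac{1}{W_p} \sum_{q \in \mathcal{N}(p)} G_{\sigma_s}\!\big(\lVert p - q \rVert\big)\, G_{\sigma_r}\!\big(\lvert I_p - I_q \rvert\big)\, I_q,
\qquad
W_p \;=\; \sum_{q \in \mathcal{N}(p)} G_{\sigma_s}\!\big(\lVert p - q \rVert\big)\, G_{\sigma_r}\!\big(\lvert I_p - I_q \rvert\big),
\]
where \(G_\sigma(x) = \exp\!\big(-x^2 / (2\sigma^2)\big)\). The spatial kernel weights pixels by distance and the range kernel by intensity difference, which is what makes the filter edge-preserving; the stream setting then requires rethinking how the temporal dimension enters these Gaussian weights when future frames are unavailable.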