Interval availability is a dependability measure defined as the fraction of time during which a system is operational over a finite observation period. Computing its distribution allows users to verify that the probability of their system achieving a given availability level is high enough. As usual, the system is assumed to be modeled by a finite Markov process. In this paper, we propose two new algorithms to compute this measure and compare them, with respect to the input parameters of the model, from both the storage requirement and the execution time points of view. We show that one of them improves on a well-known algorithm. Both algorithms are based on the uniformization technique.
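
To illustrate the uniformization technique that both algorithms rely on, the following is a minimal sketch computing the point availability of a two-state (up/down) Markov process; the rates, states, and tolerance are illustrative assumptions, not taken from the paper, and the full interval availability distribution requires the more elaborate algorithms described here.

```python
import math
import numpy as np

# Assumed 2-state model: state 0 = up, state 1 = down (illustrative rates).
lam, mu = 0.01, 1.0              # failure and repair rates (assumptions)
Q = np.array([[-lam, lam],
              [mu, -mu]])        # infinitesimal generator
Lam = max(-np.diag(Q))           # uniformization rate >= max_i |q_ii|
P = np.eye(2) + Q / Lam          # uniformized stochastic matrix

def point_availability(t, pi0=np.array([1.0, 0.0]), tol=1e-12):
    """P(system is up at time t), via the uniformization series
    A(t) = sum_n e^{-Lam t} (Lam t)^n / n! * (pi0 P^n)[up]."""
    total = np.zeros(2)
    v = pi0.copy()
    poisson = math.exp(-Lam * t)  # Poisson weight for n = 0
    acc = 0.0                     # accumulated Poisson mass
    n = 0
    while acc < 1.0 - tol:        # truncate once remaining mass < tol
        total += poisson * v
        acc += poisson
        n += 1
        poisson *= Lam * t / n    # next Poisson weight
        v = v @ P                 # advance the uniformized chain
    return total[0]               # probability mass on the up state

print(point_availability(10.0))
```

The Poisson weights bound the truncation error explicitly, which is one reason uniformization is attractive for availability computations.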