A $K^*$-sparse vector $x^* \in \mathbb{R}^N$ produces measurements via linear dimensionality reduction as $u = \Phi x^* + n$, where $\Phi \in \mathbb{R}^{M \times N}$ ($M < N$) and $n \in \mathbb{R}^M$ consists of independent and identically distributed, zero-mean Gaussian entries with variance $\sigma^2$. An algorithm, after its execution, returns a vector $\hat{x}$ that has $K$ nonzero entries and satisfies $\|u - \Phi\hat{x}\|_2 \le \epsilon$. How far can $\hat{x}$ be from $x^*$? When the measurement matrix $\Phi$ provides a stable embedding of $2K$-sparse signals (the so-called restricted isometry property), the two vectors must be very close. This paper therefore establishes worst-case bounds that characterize the distance $\|\hat{x} - x^*\|_2$ based on the online meta-information. These bounds improve upon the pre-run algorithmic recovery guarantees and are useful for exploring trade-offs between data error and solution sparsity. We also evaluate the performance of several sparse recovery algorithms in the context of our bound.
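As a rough illustration of the setup, the following Python sketch generates measurements $u = \Phi x^* + n$ and reports the two quantities the bound relates: the residual $\|u - \Phi\hat{x}\|_2$ observed after the run and the (normally unknown) error $\|\hat{x} - x^*\|_2$. The dimensions, noise level, and the orthogonal-matching-pursuit stand-in for the recovery step are all illustrative assumptions, not the paper's algorithm or parameters.

```python
import numpy as np

# Minimal sketch of the measurement model; all values are illustrative, not from the paper.
rng = np.random.default_rng(0)
N, M, K = 256, 96, 8          # ambient dimension, number of measurements, sparsity
sigma = 0.01                  # noise standard deviation

# K*-sparse ground truth x* and a Gaussian measurement matrix Phi.
x_star = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x_star[support] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # columns roughly unit norm

# Noisy measurements u = Phi x* + n.
u = Phi @ x_star + sigma * rng.standard_normal(M)

# Stand-in recovery step: plain orthogonal matching pursuit. Any algorithm that
# returns a K-sparse x_hat with a small residual could be plugged in here.
def omp(Phi, u, K):
    residual, sel = u.copy(), []
    for _ in range(K):
        sel.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, sel], u, rcond=None)
        residual = u - Phi[:, sel] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[sel] = coef
    return x_hat

x_hat = omp(Phi, u, K)

# Quantities appearing in the bound: the observed residual (available online,
# after the run) and the estimation error against the unknown ground truth.
eps = np.linalg.norm(u - Phi @ x_hat)
err = np.linalg.norm(x_hat - x_star)
print(f"residual ||u - Phi x_hat||_2 = {eps:.4f}, error ||x_hat - x*||_2 = {err:.4f}")
```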