Based on the coherence principle of de Finetti and a related notion of generalized coherence (g-coherence), we adopt a probabilistic approach to uncertainty based on conditional probability bounds. Our notion of g-coherence is equivalent to the "avoiding uniform loss" property for lower and upper probabilities (à la Walley). Moreover, given a g-coherent imprecise assessment, our algorithms can correct it, obtaining the associated coherent assessment (in the sense of Walley and Williams). As is well known, the problems of checking g-coherence and of propagating tight g-coherent intervals are NP- and FP^NP-complete, respectively, and thus NP-hard. Two notions which may help to reduce the computational effort are those of non-relevant gain and basic set. By exploiting them, our algorithms can use linear systems with reduced sets of variables and/or linear constraints. In this paper we give some insights into the notions of non-relevant gain and basic set. We consider several famil...
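
To make the role of the linear systems more concrete, the following is a minimal sketch, not the paper's exact formulation, of the kind of system solved when checking g-coherence of a lower-bound assessment (α_1, …, α_n) on a family {E_1|H_1, …, E_n|H_n}. The constituents C_1, …, C_m contained in H_1 ∨ … ∨ H_n and the unknowns x_1, …, x_m are illustrative notation introduced here, not taken from the abstract above:

\[
\begin{cases}
\displaystyle\sum_{C_h \subseteq E_i H_i} x_h \;\ge\; \alpha_i \sum_{C_h \subseteq H_i} x_h, & i = 1,\dots,n,\\[6pt]
\displaystyle\sum_{h=1}^{m} x_h = 1, \qquad x_h \ge 0, & h = 1,\dots,m.
\end{cases}
\]

Solvability of a system of this kind is only one step of the overall check; the point relevant to the abstract is that the notions of non-relevant gain and basic set allow such systems to be built with fewer variables and/or fewer constraints.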