In the presence of a heavy-tail noise distribution, regression becomes much more difficult. Traditional robust regression methods assume that the noise distribution is symmetric, and they downweight the influence of so-called outliers. When the noise distribution is asymmetric, these methods yield strongly biased regression estimators. Motivated by data-mining problems for the insurance industry, we propose in this paper a new approach to robust regression that is tailored to deal with the case where the noise distribution is asymmetric. The main idea is to learn most of the parameters of the model using conditional quantile estimators, which are biased but robust estimators of the regression, and then to learn a few remaining parameters that combine and correct these estimators so as to minimize the average squared error. Theoretical analysis and experiments show the clear advantages of the approach. Results are given on artificial data as well as real insurance data, using both linear and neural-network predictors.
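To make the two-stage idea concrete, the following is a minimal sketch rather than the paper's exact procedure: conditional quantile estimators are fit first as robust (but biased) predictors, and a small set of combination weights is then learned by least squares to target the conditional mean. The synthetic log-normal noise model, the chosen quantile levels, and the use of scikit-learn's QuantileRegressor with an ordinary-least-squares combiner are illustrative assumptions.

```python
# Sketch: robust quantile estimators combined/corrected to minimize squared error.
import numpy as np
from sklearn.linear_model import QuantileRegressor, LinearRegression

rng = np.random.default_rng(0)

# Synthetic regression problem with asymmetric, heavy-tailed (log-normal) noise.
n = 2000
X = rng.uniform(-2.0, 2.0, size=(n, 1))
noise = rng.lognormal(mean=0.0, sigma=1.0, size=n)   # skewed, heavy right tail
y = 1.5 * X[:, 0] + 0.5 + noise

X_quant, X_comb = X[:1000], X[1000:]
y_quant, y_comb = y[:1000], y[1000:]

# Stage 1: robust but biased estimators -- conditional quantiles at a few levels.
quantiles = [0.25, 0.5, 0.75]
q_models = [
    QuantileRegressor(quantile=q, alpha=0.0, solver="highs").fit(X_quant, y_quant)
    for q in quantiles
]

# Stage 2: learn a few parameters that combine and correct the quantile
# predictions so as to minimize average squared error.
Q_comb = np.column_stack([m.predict(X_comb) for m in q_models])
combiner = LinearRegression().fit(Q_comb, y_comb)

# Prediction for new inputs: combine the quantile estimates with learned weights.
X_new = np.array([[0.0], [1.0]])
Q_new = np.column_stack([m.predict(X_new) for m in q_models])
print(combiner.predict(Q_new))
```

The design point the sketch illustrates is that only the final combining weights are fit under the (outlier-sensitive) squared-error criterion, while most of the model's capacity is trained with robust quantile losses.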