We present an image enhancement algorithm that fuses the visual information from two images of the same scene captured with different exposure times. The main idea is to exploit the differences between the degradations affecting the two images: on one hand, the short-exposure image is less affected by motion blur; on the other hand, the long-exposure image is less affected by noise. Separate fusion rules are designed for the luminance and chrominance components so that the desirable properties of each input image are preserved. We also present a method for estimating the brightness transfer function between the input images. Because no global blur PSF is assumed, our method can handle blur caused by both camera and object motion. We demonstrate the algorithm through a series of experiments and simulations.
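The abstract does not specify how the brightness transfer function (BTF) is estimated; a common illustrative approach, sketched below under that assumption, is histogram matching: map each intensity level of the short-exposure image to the long-exposure level with the closest cumulative probability. The function name `estimate_btf` and the synthetic test images are hypothetical, not taken from the paper.

```python
import numpy as np

def estimate_btf(short_img, long_img, n_levels=256):
    """Estimate a brightness transfer function mapping short-exposure
    intensities to long-exposure intensities via histogram matching.
    Assumes both uint8 images depict the same (mostly static) scene."""
    # Normalized cumulative histograms of both images
    cdf_short = np.cumsum(np.bincount(short_img.ravel(), minlength=n_levels))
    cdf_long = np.cumsum(np.bincount(long_img.ravel(), minlength=n_levels))
    cdf_short = cdf_short / cdf_short[-1]
    cdf_long = cdf_long / cdf_long[-1]
    # For each short-exposure level, pick the first long-exposure level
    # whose cumulative probability reaches the same value
    btf = np.searchsorted(cdf_long, cdf_short).clip(0, n_levels - 1)
    return btf.astype(np.uint8)

# Synthetic check: the "long" exposure is a brightened copy of the "short" one,
# so the estimated BTF should approximately recover the doubling.
rng = np.random.default_rng(0)
short = rng.integers(0, 128, size=(64, 64), dtype=np.uint8)
long_ = np.clip(short.astype(int) * 2, 0, 255).astype(np.uint8)
btf = estimate_btf(short, long_)
mapped = btf[short]  # short-exposure image mapped into long-exposure brightness
```

In a full pipeline, the mapped image would share the long exposure's brightness, allowing the luminance channels to be compared and fused directly.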