Traditional shape-from-silhouette methods compute the 3D shape as the intersection of the back-projected silhouettes in 3D space, the so-called visual hull. However, silhouettes obtained with background subtraction techniques often contain detection errors (false negatives, for instance caused by occlusions), which lead to incomplete 3D shapes. Our approach handles missed detections, false alarms, and noise in the silhouettes. We recover the voxel occupancy that describes the 3D shape by minimizing an energy based on an approximation of the error between the shape's 2D projections and the silhouettes. We propose two variants of the projection, and hence of the energy, as a function of the voxel occupancy; one of these variants outperforms the other. The energy also includes a sparsity measure and a regularization term, and it takes into account the visibility of the voxels in each view in order to handle self-occlusions.
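As a rough sketch only (the exact terms, norms, and weights used in this work are not reproduced here), an energy of this kind can be written as follows, where $x \in [0,1]^N$ denotes the voxel occupancies, $\pi_v(x)$ an approximate projection of the occupied voxels into view $v$ restricted to voxels visible in that view, $S_v$ the observed silhouette, and $\lambda$, $\mu$ hypothetical weights for the sparsity and regularization terms:
\[
E(x) \;=\; \sum_{v} \big\| \pi_v(x) - S_v \big\|^2 \;+\; \lambda \sum_{i} x_i \;+\; \mu \, R(x),
\]
where the data term penalizes the discrepancy between each projection and its silhouette, the sparsity term discourages spurious occupied voxels caused by false alarms, and $R(x)$ is a regularizer enforcing spatial coherence of the recovered shape. Restricting $\pi_v$ to visible voxels is what allows self-occluded voxels to be excluded from the data term.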