We consider camera self-calibration, i.e., the estimation of the cameras' parameters, in the setting of a visual sensor network whose sensors are distributed and energy-constrained. With the objective of reducing the communication burden and thereby maximizing network lifetime, we propose an energy-efficient approach to self-calibration in which feature points are extracted locally at the cameras and efficient descriptions of these features are transmitted to a central processor that performs the self-calibration. Specifically, in this work we use reduced-dimensionality quantized approximations as efficient feature descriptors. The effectiveness of the proposed technique is validated through feature matching and epipolar geometry estimation, which together enable self-calibration of the network.
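The pipeline described above can be illustrated with a minimal sketch: each camera node projects its local descriptors onto a low-dimensional basis and uniformly quantizes them before transmission, and the central processor reconstructs the descriptors, matches them, and estimates the epipolar geometry. The specific dimensionality-reduction and quantization design, bit depths, and thresholds below are illustrative assumptions, not the paper's actual scheme; OpenCV is used only as a stand-in for matching and fundamental-matrix estimation.

```python
import numpy as np
import cv2


def compress_descriptors(desc, basis, n_bits=8):
    """Camera-side step: project descriptors onto a low-dimensional basis
    and uniformly quantize them to reduce the transmitted payload.

    desc  : (N, D) float32 descriptors extracted at the camera node.
    basis : (D, d) projection matrix (e.g. learned offline), with d << D.
    Returns quantized codes plus the scale parameters needed to invert them.
    """
    reduced = desc.astype(np.float32) @ basis        # (N, d) reduced descriptors
    lo, hi = float(reduced.min()), float(reduced.max())
    levels = 2 ** n_bits - 1
    codes = np.round((reduced - lo) / (hi - lo) * levels).astype(np.uint8)
    return codes, lo, hi, levels


def decompress_descriptors(codes, lo, hi, levels):
    """Central-processor step: invert the uniform quantization."""
    return codes.astype(np.float32) / levels * (hi - lo) + lo


def estimate_epipolar_geometry(pts1, desc1, pts2, desc2, ratio=0.75):
    """Match reconstructed descriptors from two cameras and estimate the
    fundamental matrix relating their views (RANSAC for robustness).

    pts1, pts2   : (N, 2) float32 keypoint coordinates in each image.
    desc1, desc2 : (N, d) float32 reconstructed descriptors.
    """
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(desc1, desc2, k=2)
    # Lowe-style ratio test to keep only distinctive correspondences.
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    p1 = np.float32([pts1[m.queryIdx] for m in good])
    p2 = np.float32([pts2[m.trainIdx] for m in good])
    F, inlier_mask = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 1.0, 0.99)
    return F, inlier_mask
```

In this sketch, only the quantized codes and a few scalar parameters would be sent over the network, while matching and epipolar geometry estimation run entirely at the central processor.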