Neural gas (NG) constitutes a very robust clustering algorithm which can be derived as stochastic gradient descent from a cost function closely connected to the quantization error. In the limit, an NG network samples the underlying data distribution. Thereby, the connection is not linear; rather, it follows a power law with a magnification exponent different from the information-theoretically optimal one for adaptive map formation. A couple of schemes exist to explicitly control this exponent, such as local learning, which requires only a small change to the learning algorithm of NG. Batch NG constitutes a fast alternative optimization scheme for NG vector quantizers, which has been derived from the same cost function and which realizes a fast Newton optimization. It possesses the same magnification exponent (different from 1) as standard online NG. In this paper, we propose a method to integrate magnification control by local learning into batch NG. Thereby, the key observation i...
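For orientation, the quantities referred to above can be made precise as follows; this is a brief sketch in standard NG notation (the symbols $w_i$, $k_i$, $h_\lambda$, and $P$ are our choice and not fixed by the abstract). NG minimizes the cost function

\[
E_{\mathrm{NG}} = \frac{1}{2C(\lambda)} \sum_{i=1}^{n} \int h_\lambda\bigl(k_i(v,W)\bigr)\,\|v - w_i\|^2\, P(v)\, dv,
\qquad h_\lambda(k) = \exp(-k/\lambda),
\]

where $k_i(v,W)$ denotes the rank of prototype $w_i$ when all prototypes are ordered by their distance to the input $v$. The resulting prototype density follows the power law $\rho(w) \propto P(w)^{\alpha}$ with magnification exponent $\alpha = d/(d+2)$, $d$ being the intrinsic data dimension, whereas $\alpha = 1$ would be information-theoretically optimal. Batch NG optimizes the same cost on a finite data set by alternating the rank computation with the closed-form prototype update

\[
w_i = \frac{\sum_j h_\lambda\bigl(k_i(v_j,W)\bigr)\, v_j}{\sum_j h_\lambda\bigl(k_i(v_j,W)\bigr)}.
\]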