We present a new algorithm for non-unitary approximate joint diagonalization (AJD), based on a “natural gradient”-type multiplicative update of the diagonalizing matrix, complemented by step-size optimization at each iteration. The advantages of the new algorithm over existing non-unitary AJD algorithms are its ability to accommodate non-positive-definite matrices (unlike Pham’s algorithm), its low computational load per iteration (compared with Yeredor’s AC-DC algorithm), and its theoretically guaranteed convergence to a true (possibly local) minimum (unlike Ziehe et al.’s FFDiag algorithm).
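To make the abstract's description concrete, the following is a minimal NumPy sketch of a generic multiplicative, relative ("natural") gradient AJD iteration, not the paper's exact algorithm: the off-diagonal Frobenius criterion, the row normalization used to rule out the trivial zero solution, and the backtracking search standing in for the per-iteration step-size optimization are all assumptions added for illustration.

```
import numpy as np

def off(M):
    """Zero out the diagonal of M (the 'off-diagonal' operator)."""
    return M - np.diag(np.diag(M))

def ajd_offdiag_cost(B, Cs):
    """Sum of squared off-diagonal entries of B C_k B^T over all target matrices."""
    return sum(np.sum(off(B @ C @ B.T) ** 2) for C in Cs)

def ajd_multiplicative(Cs, n_iter=200, tol=1e-12, seed=0):
    """Illustrative AJD of symmetric matrices Cs via multiplicative updates
    B <- (I - mu*G) B along the relative gradient G, with backtracking step size.
    This is a simplified sketch, not the algorithm proposed in the paper."""
    n = Cs[0].shape[0]
    rng = np.random.default_rng(seed)
    B = np.eye(n) + 0.01 * rng.standard_normal((n, n))
    cost = ajd_offdiag_cost(B, Cs)
    for _ in range(n_iter):
        Ds = [B @ C @ B.T for C in Cs]
        # Relative gradient of the off-diagonal criterion with respect to a
        # multiplicative perturbation (I + E) B, for symmetric targets.
        G = 4.0 * sum(off(D) @ D for D in Ds)
        mu = 1.0 / (np.linalg.norm(G) + 1e-12)  # initial trial step along -G
        # Backtracking line search stands in for per-iteration step-size optimization.
        for _ in range(30):
            B_new = (np.eye(n) - mu * G) @ B
            # Row normalization excludes the trivial minimizer B -> 0.
            B_new /= np.linalg.norm(B_new, axis=1, keepdims=True)
            new_cost = ajd_offdiag_cost(B_new, Cs)
            if new_cost < cost:
                break
            mu *= 0.5
        if cost - new_cost < tol * max(cost, 1.0):
            B = B_new
            break
        B, cost = B_new, new_cost
    return B

if __name__ == "__main__":
    # Toy check: matrices sharing a common non-orthogonal diagonalizer A.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4))
    Cs = [A @ np.diag(rng.standard_normal(4) ** 2) @ A.T for _ in range(10)]
    B = ajd_multiplicative(Cs)
    print("residual off-diagonal cost:", ajd_offdiag_cost(B, Cs))
```

Because every update left-multiplies B by a matrix close to the identity, the iterate stays invertible for small steps, and no positive-definiteness of the target matrices is required by this criterion.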