We present a distributed machine learning framework based on support vector machines (SVMs) that solves classification problems iteratively through parallel update algorithms with minimal communication overhead. The main problem is decomposed into multiple relaxed subproblems that can be solved simultaneously by individual computing units operating in parallel, each with access to only a subset of the data. We derive a sufficient condition under which a synchronous, discrete-time gradient update algorithm converges to the approximate solution. We apply the proposed distributed learning framework as a first processing layer in the context of automatic image tagging. Initial results from corresponding experiments indicate that the proposed framework has favorable properties, including efficiency, configurability, robustness, suitability for online learning, and low communication overhead.
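To make the decomposition idea concrete, the following is a minimal sketch, not the paper's exact algorithm: each of K hypothetical computing units holds a subset of the data and performs synchronous subgradient steps on a local linear-SVM (hinge-loss) objective, with a simple averaging step standing in for the low-overhead communication between units. All names, the step size, and the regularization constant are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data, split across K "computing units".
K, n, d = 4, 200, 2
X = rng.normal(size=(n, d))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
parts = np.array_split(rng.permutation(n), K)


def local_subgradient(w, Xi, yi, lam=0.01):
    """Subgradient of (lam/2)||w||^2 + mean hinge loss on one unit's subset."""
    margins = yi * (Xi @ w)
    viol = margins < 1.0  # points violating the margin contribute to the subgradient
    g_hinge = -(yi[viol, None] * Xi[viol]).sum(axis=0) / len(yi)
    return lam * w + g_hinge


w_local = np.zeros((K, d))
eta = 0.1  # fixed step size (assumption; the paper derives its own condition)
for t in range(500):
    # Each unit updates using only its own data subset (parallelizable step).
    for k, idx in enumerate(parts):
        w_local[k] -= eta * local_subgradient(w_local[k], X[idx], y[idx])
    # Synchronous, low-volume communication: exchange and average the iterates.
    w_local[:] = w_local.mean(axis=0)

w = w_local[0]
acc = np.mean(np.sign(X @ w) == y)
```

The averaging step is one common way to couple parallel subproblem solvers; the actual coupling used in the framework may differ.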