Distributed constraint satisfaction, in its most general form, involves a collection of agents that each solve a local constraint satisfaction subproblem, together with a communication protocol between agents that allows the distributed system to converge to a global solution. The literature, however, often concentrates on the restricted setting in which each agent owns exactly one variable, on the grounds that the corresponding algorithms extend easily to the general case. While this is largely true, the specificities of agents handling local CSPs open the way to numerous improvements, since a tradeoff becomes possible between local and distributed search effort. In this paper, we seek to improve nogood learning and solver cooperation in multi-variable distributed constraint satisfaction problems. We propose incremental improvements to be implemented on top of an ABT-like algorithm, and experimentally evaluate the performance gains they bring.