We consider a multi-agent optimization problem in which agents aim to cooperatively minimize a sum of local objective functions subject to a global inequality constraint and a global state constraint set. In contrast to existing work, we do not require that the objective functions, constraint functions, and state constraint sets be convex. We propose a distributed approximate dual subgradient algorithm that enables agents to asymptotically converge to a pair of approximate primal-dual solutions over dynamically changing network topologies. Convergence is guaranteed provided that Slater's condition and the strong duality property are satisfied.
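To convey the flavor of a distributed dual subgradient scheme, the following is a minimal sketch, not the paper's algorithm: it assumes a scalar decision variable, illustrative quadratic local objectives `f`, a single linear constraint `g(x) <= 0`, a box state constraint `X`, a fixed ring topology with doubly stochastic weights `W` (the paper treats dynamically changing topologies), a constant step size `alpha`, and a crude grid search in place of the paper's approximate primal minimization. All names and numerical choices here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4                            # number of agents (assumed)
targets = rng.uniform(-1.0, 1.0, size=N)

def f(i, x):
    """Local objective of agent i (illustrative quadratic)."""
    return (x - targets[i]) ** 2

def g(x):
    """Global inequality constraint g(x) <= 0 (illustrative)."""
    return x - 0.5

X = (-1.0, 1.0)                  # global state constraint set (a box here)
alpha = 0.05                     # dual step size (assumed constant)
lam = np.zeros(N)                # each agent's local copy of the dual variable
x_est = np.zeros(N)              # each agent's primal estimate

# Doubly stochastic mixing weights for a fixed ring topology (assumption).
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i + 1) % N] = 0.25
    W[i, (i - 1) % N] = 0.25

grid = np.linspace(X[0], X[1], 201)   # grid for approximate primal minimization

for k in range(500):
    lam = W @ lam                     # consensus step on dual estimates
    for i in range(N):
        # Approximate primal step: minimize the local Lagrangian over X.
        vals = f(i, grid) + lam[i] * g(grid)
        x_est[i] = grid[np.argmin(vals)]
        # Dual subgradient ascent, projected onto the nonnegative orthant.
        lam[i] = max(0.0, lam[i] + alpha * g(x_est[i]))

print("primal estimates:", np.round(x_est, 3))
print("dual estimates:  ", np.round(lam, 3))
```

Under the sketch's assumptions, the consensus step drives the agents' dual estimates together while the projected ascent step enforces dual feasibility; the grid search stands in for whatever approximate minimization oracle is actually available, which is where the "approximate" in the algorithm's name enters.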