In annotation overlay applications using augmented reality (AR), view management is widely used to improve the readability and intelligibility of annotations. Conventional view management methods require the positions, orientations, and shapes of objects to be known in order to determine which portions of the objects are visible in the user's view. However, it is difficult for a wearable AR system to obtain this information when the target objects are moving or non-rigid. In this paper, we propose a view management method for overlaying annotations on moving or non-rigid objects in networked wearable AR. The proposed method obtains the positions and shapes of target objects via a network and uses them to estimate the visible portions of the target objects in the user's view. Annotations are placed by minimizing penalties related to overlap between annotations, occlusion of target objects, and the length of the line connecting an annotation to its target object.
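To make the penalty-minimization idea concrete, the following is a minimal sketch, not the authors' implementation: it assumes axis-aligned rectangles for annotations and for the visible regions of target objects, a grid of candidate label positions, and hypothetical weights (w_overlap, w_occlusion, w_line) that are not specified in the text above. It scores each candidate position by the three penalty terms named in the abstract and keeps the one with the lowest total cost.

```python
from dataclasses import dataclass
from itertools import product
import math


@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def center(self):
        return (self.x + self.w / 2, self.y + self.h / 2)


def overlap_area(a: Rect, b: Rect) -> float:
    """Area of intersection of two rectangles (0 if they are disjoint)."""
    dx = min(a.x + a.w, b.x + b.w) - max(a.x, b.x)
    dy = min(a.y + a.h, b.y + b.h) - max(a.y, b.y)
    return max(dx, 0.0) * max(dy, 0.0)


def penalty(candidate: Rect, anchor, placed, visible_regions,
            w_overlap=1.0, w_occlusion=2.0, w_line=0.01):
    """Weighted sum of the penalty terms named in the abstract:
    overlap with already-placed annotations, occlusion of the visible
    portions of target objects, and length of the leader line.
    The weights are illustrative assumptions."""
    p_overlap = sum(overlap_area(candidate, r) for r in placed)
    p_occlusion = sum(overlap_area(candidate, r) for r in visible_regions)
    cx, cy = candidate.center()
    p_line = math.hypot(cx - anchor[0], cy - anchor[1])
    return w_overlap * p_overlap + w_occlusion * p_occlusion + w_line * p_line


def place_annotation(anchor, size, placed, visible_regions,
                     view_w=640, view_h=480, step=20):
    """Score candidate label positions on a grid covering the view and
    return the rectangle with the lowest total penalty."""
    best, best_cost = None, float("inf")
    for x, y in product(range(0, view_w - size[0], step),
                        range(0, view_h - size[1], step)):
        cand = Rect(float(x), float(y), float(size[0]), float(size[1]))
        cost = penalty(cand, anchor, placed, visible_regions)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best


if __name__ == "__main__":
    # One visible object region (e.g. the unoccluded part of a person in view)
    visible = [Rect(200, 150, 120, 180)]
    anchor = (260, 240)  # point on the object that the annotation refers to
    label = place_annotation(anchor, (100, 40), placed=[], visible_regions=visible)
    print("chosen label position:", label)
```

In a networked wearable AR setting, the visible_regions list would be derived from the object positions and shapes received over the network for the current frame, and placed would accumulate the labels already laid out in that frame so later annotations avoid them.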