We present a new descriptor for depth images adapted to 2D/3D model matching and retrieval. A 3D model is represented by 20 depth images rendered from the vertices of a regular dodecahedron. Each depth image is associated with a set of depth lines, which are then transformed into sequences. This depth sequence information describes 3D shape boundaries more accurately than other 2D shape descriptors. Similarity is computed with the dynamic programming distance (DPD), which compares the depth line descriptors and matches sequences accurately even in the presence of local shifting on the shape. Results on a large 3D database show the efficiency of our 2D/3D approach.
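As an illustration of the matching step, the sketch below computes a dynamic-programming distance between two depth-line sequences, modeled here as 1D arrays of depth values. The absolute-difference local cost and the `gap_cost` penalty are assumptions chosen for illustration, not the exact cost model used in the paper; the point is that the alignment tolerates local shifts along the depth line.

```python
import numpy as np

def dpd(seq_a, seq_b, gap_cost=1.0):
    """Dynamic-programming distance between two depth-line sequences.

    seq_a, seq_b: 1D arrays of depth values sampled along a depth line.
    gap_cost: penalty for skipping a sample (absorbs local shifting).
    Local cost and gap penalty are illustrative assumptions.
    """
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    # Allow leading gaps so sequences of different lengths can align.
    D[1:, 0] = gap_cost * np.arange(1, n + 1)
    D[0, 1:] = gap_cost * np.arange(1, m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = abs(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = min(D[i - 1, j - 1] + match,   # align the two samples
                          D[i - 1, j] + gap_cost,    # skip a sample in seq_a
                          D[i, j - 1] + gap_cost)    # skip a sample in seq_b
    return D[n, m]

# Example: two depth lines that differ by a small local shift.
a = np.array([0.0, 0.1, 0.5, 0.9, 0.9, 0.4, 0.0])
b = np.array([0.0, 0.5, 0.9, 0.9, 0.9, 0.4, 0.0])
print(dpd(a, b))
```

In this sketch, the distance between two depth images would be aggregated from the distances of their corresponding depth lines, and the distance between two models from their best-matching sets of views.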