Robot self-localization using a hemispherical camera system can be performed without point correspondences. We present a view-based approach using view descriptors, which allows us to compare image signals taken at different locations efficiently. A compact representation of the image signal is obtained by expanding it in Spherical Harmonics, an orthonormal set of basis functions defined on the sphere. This representation is particularly useful because the rotation between two representations can be estimated easily. Compact view descriptors stored in a database enable us to compute the likelihood that the current view corresponds to a particular position and orientation in the map.
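
The following is a minimal sketch (not the authors' implementation) of how such view descriptors might be built: a spherical image is projected onto the Spherical Harmonic basis, and the per-degree band energies, which are invariant to rotation, are compared against descriptors stored in a database. The function names, the grid layout, the truncation degree, and the Gaussian likelihood model are illustrative assumptions; the rotation estimation between two full coefficient sets (e.g. via alignment over the rotation group) is not shown here.

```python
import numpy as np
from scipy.special import sph_harm  # Y_l^m(theta=azimuth, phi=polar)

L_MAX = 8  # maximum SH degree used for the compact representation (assumed)


def sh_coefficients(image_on_sphere, theta, phi):
    """Project a spherical image f(theta, phi) onto the SH basis.

    image_on_sphere, theta, phi: 2-D arrays on an equiangular grid,
    rows indexed by polar angle phi in [0, pi], columns by azimuth
    theta in [0, 2*pi). Returns a dict {(l, m): coefficient}.
    """
    # Quadrature weights for the equiangular grid: sin(phi) * dtheta * dphi.
    dtheta = 2.0 * np.pi / image_on_sphere.shape[1]
    dphi = np.pi / image_on_sphere.shape[0]
    weights = np.sin(phi) * dtheta * dphi

    coeffs = {}
    for l in range(L_MAX + 1):
        for m in range(-l, l + 1):
            Ylm = sph_harm(m, l, theta, phi)
            coeffs[(l, m)] = np.sum(image_on_sphere * np.conj(Ylm) * weights)
    return coeffs


def band_energy_descriptor(coeffs):
    """Compact, rotation-invariant descriptor: L2 energy of each SH band."""
    return np.array([
        np.sqrt(sum(abs(coeffs[(l, m)]) ** 2 for m in range(-l, l + 1)))
        for l in range(L_MAX + 1)
    ])


def view_likelihood(descriptor, db_descriptor, sigma=1.0):
    """Assumed Gaussian likelihood that the current view matches a map entry."""
    d = np.linalg.norm(descriptor - db_descriptor)
    return np.exp(-0.5 * (d / sigma) ** 2)
```

In use, each map location would store its precomputed descriptor, and localization would evaluate `view_likelihood` of the current descriptor against all stored entries; the band-energy form discards orientation, so recovering the orientation in the map would require comparing the full coefficient sets rather than the invariant descriptor alone.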