Augmented Reality (AR) enables users to visualize synthetic information overlaid on a real video stream. Such visualization is achieved by tools that vary depending on the underlying data or the task at hand. While some tools distort, filter, or enhance the explored information, little or no work has focused on separating style definitions from their mapping to scene objects. We target this separation based on a context-rich scene graph. Our approach allows visualization tools to be defined independently of the data to be visualized and of the application that will use them. We show how contextual information can add another hierarchical dimension to scene objects, and how this dimension can in turn be exploited by hierarchical style definitions.
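
The proposed separation resembles a CSS-like cascade over a context hierarchy attached to the scene graph. As a minimal sketch of the idea (all names and the rule format are hypothetical illustrations, not the paper's actual API), the following Python resolves a node's visual style by merging rules from the most general to the most specific context, so that style definitions stay independent of the concrete scene data:

```python
# Hypothetical sketch: scene-graph nodes carry context tags, and style
# rules are resolved by walking the context hierarchy, analogous to a
# CSS-like cascade. Assumed names; not the paper's actual implementation.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SceneNode:
    name: str
    context: dict = field(default_factory=dict)   # e.g. {"tag": "vessel"}
    parent: Optional["SceneNode"] = None
    children: list = field(default_factory=list)

    def add(self, child: "SceneNode") -> "SceneNode":
        child.parent = self
        self.children.append(child)
        return child

# Hierarchical style definitions: a more specific context path overrides
# a more general one, independent of any particular application or data.
STYLES = {
    ("anatomy",):                   {"render": "solid", "alpha": 1.0},
    ("anatomy", "vessel"):          {"render": "wireframe"},
    ("anatomy", "vessel", "focus"): {"alpha": 0.3, "highlight": True},
}

def context_path(node: Optional[SceneNode]) -> tuple:
    """Collect context tags from the root down to the given node."""
    tags = []
    while node is not None:
        if "tag" in node.context:
            tags.append(node.context["tag"])
        node = node.parent
    return tuple(reversed(tags))

def resolve_style(node: SceneNode) -> dict:
    """Merge styles from the most general to the most specific context."""
    path = context_path(node)
    style: dict = {}
    for depth in range(1, len(path) + 1):
        style.update(STYLES.get(path[:depth], {}))
    return style

root = SceneNode("patient", {"tag": "anatomy"})
vessels = root.add(SceneNode("vessels", {"tag": "vessel"}))
aorta = vessels.add(SceneNode("aorta", {"tag": "focus"}))
print(resolve_style(aorta))
# -> {'render': 'wireframe', 'alpha': 0.3, 'highlight': True}
```

Because the style table keys on context paths rather than on concrete node names, the same rules can be reused by any application whose scene graph exposes a compatible context hierarchy.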