This paper introduces Jump, a prototype computer vision-based system that transforms paper-based architectural documents into tangible query interfaces. Specifically, Jump allows a user to obtain additional information related to an architectural document by framing a portion of the drawing with physical brackets. The framed area appears in a magnified view on a separate display, and Jump applies the principle of semantic zooming to determine the appropriate level of detail to show. Filter tokens placed on the paper modify the digital presentation to include information absent from the original drawing, such as electrical, mechanical, and structural information related to the given space. These filter tokens serve as tangible sliders: their relative location on the paper controls the degree to which their information is blended with the original document. To address the issue of recognition errors, Jump introduces the notion of a reflection window, or an inset window...
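The two interaction mechanics described above can be sketched in code. The following is a minimal illustrative sketch, not Jump's actual implementation: it assumes a hypothetical `detail_level` function that maps the framed region's physical size to a semantic-zoom level, and a hypothetical `blend_weight` function that maps a filter token's horizontal position on the page to an overlay blend factor; all thresholds and page dimensions are invented for illustration.

```python
def detail_level(frame_width_mm: float) -> str:
    """Semantic zooming: a smaller framed region implies closer inspection,
    so the magnified view shows a finer level of detail.
    (Thresholds are illustrative assumptions, not Jump's values.)"""
    if frame_width_mm < 50:
        return "full-detail"   # e.g. dimensions, annotations, fixtures
    elif frame_width_mm < 200:
        return "room-level"    # e.g. walls, doors, room labels
    return "overview"          # e.g. building outline only


def blend_weight(token_x_mm: float, page_width_mm: float = 420.0) -> float:
    """A filter token as a tangible slider: its position along the page
    controls how strongly its layer (e.g. electrical) is blended with
    the original drawing. Returns a weight clamped to [0, 1]."""
    return max(0.0, min(1.0, token_x_mm / page_width_mm))


print(detail_level(30))             # small frame -> "full-detail"
print(round(blend_weight(210), 2))  # token at mid-page -> 0.5
```

In this reading, the bracket-framed area drives `detail_level`, while each token's placement drives an independent `blend_weight` for its information layer, so the user tunes both zoom and layer mixing purely by manipulating physical objects on the paper.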