We describe a novel multiplexing approach to achieve tradeoffs in space, angle, and time resolution in photography. We explore the problem of mapping useful subsets of a time-varying 4D lightfield into a single snapshot. Our design is based on a dynamic mask in the aperture and a static mask close to the sensor. The key idea is to exploit scene-specific redundancy along the spatial, angular, and temporal dimensions and to provide a programmable or variable resolution tradeoff among them. This allows a user to reinterpret the single captured photo as either a high spatial resolution image, a refocusable image stack, or a video for different parts of the scene in post-processing. In contrast, a lightfield camera or a video camera forces an a priori choice of space-angle-time resolution. We demonstrate a single prototype that provides flexible post-capture abilities not possible with either a single-shot lightfield camera or a multi-frame video camera. We show several novel results includ...
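
To make the multiplexing idea concrete, the following is a minimal toy sketch, not the authors' implementation: it forms a single "snapshot" by summing angular/temporal slices weighted by per-slice mask codes, then reinterprets that one measurement either as a single normalized image or, by exploiting an assumed 2x2-block redundancy, as recovered individual slices. The array shapes, the continuous mask codes, the block-wise redundancy model, and the least-squares recovery are all illustrative assumptions, not the paper's optics or reconstruction algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 4, 8, 8  # hypothetical: 4 angular/temporal slices, 8x8 pixels

# Stand-in for the time-varying lightfield: each 2x2 pixel block shares a value
# within a slice, a toy form of the scene-specific redundancy the paper exploits.
coarse = rng.random((T, H // 2, W // 2))
views = np.kron(coarse, np.ones((1, 2, 2)))

# Per-slice modulation weights, playing the role of the mask codes.
codes = rng.random((T, H, W))

# Single multiplexed snapshot: every slice contributes, weighted by its code.
snapshot = (codes * views).sum(axis=0)

# Reinterpretation 1: read the snapshot as one full-resolution image by
# normalizing out the total mask transmission at each pixel.
image = snapshot / codes.sum(axis=0)

# Reinterpretation 2: recover the individual slices block by block via least
# squares, using the assumed 2x2 redundancy to make the system solvable
# (4 pixel equations per block for T = 4 unknown slice values).
recovered = np.zeros_like(views)
for by in range(0, H, 2):
    for bx in range(0, W, 2):
        A = codes[:, by:by + 2, bx:bx + 2].reshape(T, -1).T  # (pixels, slices)
        b = snapshot[by:by + 2, bx:bx + 2].ravel()
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        recovered[:, by:by + 2, bx:bx + 2] = sol[:, None, None]

print("snapshot shape:", snapshot.shape)
print("mean slice recovery error:", float(np.abs(recovered - views).mean()))
```

In this sketch the same captured array supports both readings after the fact; the actual system instead realizes the per-slice codes optically with the aperture and sensor masks, and its reconstruction adapts the space-angle-time tradeoff per scene region.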