In this paper we describe extensions to our work on ThinSight that were necessary to scale the system to larger tabletop displays. The technique integrates optical sensors into existing off-the-shelf LCDs with minimal impact on the physical form of the display. This allows thin form-factor sensing that goes beyond the capabilities of existing multi-touch techniques, such as capacitive or resistive approaches. Specifically, the technique senses not only multiple fingertips but also the outlines of whole hands and other passive tangible objects placed on the surface. It can also support sensing of, and communication with, devices that carry embedded computation, such as a mobile phone or an active stylus. We explore some of these possibilities in this paper. Scaling up the implementation to a tabletop has been non-trivial and has required modifications to the LCD architecture beyond those described in our earlier work. We also discuss these modifications, to allow others to make practical use of ThinSight.
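To make the sensing idea concrete, the following is a minimal illustrative sketch, not the ThinSight implementation itself, of how a grid of normalized optical (IR-reflectance) readings might be thresholded and clustered into contact blobs corresponding to fingertips, hands, or objects. The grid size, threshold value, and function names are assumptions chosen purely for illustration.

```python
import numpy as np
from scipy import ndimage

def detect_contacts(sensor_frame, threshold=0.6, min_area=2):
    """Threshold a normalized reflectance frame and return blob centroids.

    sensor_frame: 2D array of sensor readings in [0, 1] (hypothetical layout).
    Returns a list of (x, y, area) tuples in sensor-grid coordinates.
    """
    mask = sensor_frame > threshold                # bright regions = reflecting objects
    labels, num = ndimage.label(mask)              # connected-component labelling
    contacts = []
    for i in range(1, num + 1):
        blob = labels == i
        area = int(blob.sum())
        if area >= min_area:                       # ignore single-sensor noise
            cy, cx = ndimage.center_of_mass(blob)  # centroid of the blob
            contacts.append((cx, cy, area))
    return contacts

# Example: a hypothetical 10x10 grid of normalized sensor readings
frame = np.zeros((10, 10))
frame[2:4, 3:5] = 0.9   # a fingertip-sized bright patch
frame[6:9, 1:6] = 0.8   # a larger patch, e.g. part of a palm
print(detect_contacts(frame))
```

The blob area offers one simple way to distinguish a fingertip from a larger object such as a palm; richer shape analysis would be needed to recover full outlines.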