We describe a software system that enables computer-generated soundscapes to be synthesised, spatialised and edited using a gestural interface. Iterative design and testing of the software interface has taken place in a walk-in, immersive, virtual-reality theatre. Sound spatialisation has been implemented for an 8-speaker array using a Vector Base Amplitude Panning (VBAP) algorithm. The software has been written in Java, JSyn and Java3D, with native method calls to sound cards and sensors.
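To illustrate the panning technique named above, the following is a minimal sketch of two-dimensional VBAP gain computation for one adjacent speaker pair, written in Java to match the system's implementation language. It is not the paper's actual code; the class and method names are hypothetical. In VBAP, the source direction vector is expressed as a linear combination of the unit vectors of the two nearest speakers, and the resulting gains are power-normalised.

```java
public class Vbap2D {
    /**
     * Compute normalised gains for two adjacent speakers at azimuths
     * spk1Deg and spk2Deg (degrees) for a source at srcDeg.
     * Solves p = g1*l1 + g2*l2 for the speaker unit vectors l1, l2,
     * then power-normalises the gains so g1^2 + g2^2 = 1.
     */
    public static double[] gains(double srcDeg, double spk1Deg, double spk2Deg) {
        double[] p  = unit(srcDeg);   // source direction unit vector
        double[] l1 = unit(spk1Deg);  // speaker 1 unit vector
        double[] l2 = unit(spk2Deg);  // speaker 2 unit vector
        // Solve the 2x2 system by Cramer's rule
        double det = l1[0] * l2[1] - l1[1] * l2[0];
        double g1 = (p[0] * l2[1] - p[1] * l2[0]) / det;
        double g2 = (p[1] * l1[0] - p[0] * l1[1]) / det;
        // Power-normalise for constant perceived loudness
        double norm = Math.sqrt(g1 * g1 + g2 * g2);
        return new double[] { g1 / norm, g2 / norm };
    }

    private static double[] unit(double deg) {
        double rad = Math.toRadians(deg);
        return new double[] { Math.cos(rad), Math.sin(rad) };
    }

    public static void main(String[] args) {
        // A source midway between speakers at 0° and 45° gets equal gains
        double[] g = gains(22.5, 0.0, 45.0);
        System.out.printf("g1=%.4f g2=%.4f%n", g[0], g[1]);
    }
}
```

For an 8-speaker ring, the same computation would be applied after selecting the speaker pair whose arc contains the source azimuth.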