Camera networks have become increasingly important in
recent years. Previous approaches mostly used point correspondences
between different camera views to calibrate
such systems. However, it is often difficult or even impossible
to establish such correspondences. In this paper, we
therefore present an approach to calibrate a static camera
network where no correspondences between different camera
views are required. Each camera tracks its own set of
feature points on a commonly observed moving rigid object
and these 2D feature trajectories are then fed into our algorithm.
By assuming that each camera is well approximated
by an affine camera model, we show that the projection of
any feature point trajectory onto any affine camera axis is
restricted to a 13-dimensional subspace. This observation
enables the computation of the camera calibration matrices,
the coordinates of the tracked feature points, and the
rigid motion of the object with a non-iterative trilinear factorization.
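The 13-dimensional subspace claim can be sketched as follows, assuming the standard affine imaging model and a rigid motion $x(t) = R(t)X + t(t)$ (the symbols $a$, $o$, $R(t)$, $t(t)$, and $X$ are illustrative notation, not necessarily the paper's):

\begin{align*}
\text{Let } a^\top &\text{ be one affine camera axis with offset } o.\\
\text{The image coordinate of point } X \text{ at time } t \text{ is}\\
a^\top x(t) + o &= a^\top R(t) X + a^\top t(t) + o\\
&= (X \otimes a)^\top \operatorname{vec}\!\big(R(t)\big) + a^\top t(t) + o,
\end{align*}

using the identity $a^\top R X = (X \otimes a)^\top \operatorname{vec}(R)$. Stacked over time, every such trajectory is therefore a fixed linear combination of the 9 entries of $\operatorname{vec}(R(t))$, the 3 entries of $t(t)$, and the constant function $1$ — i.e., it lies in a subspace of dimension at most $9 + 3 + 1 = 13$, shared by all points and all camera axes.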