A powerful approach to search is to learn a distribution over good solutions (in particular, over the dependencies between their variables) and to use this distribution as a basis for sampling new search points. Existing algorithms learn the search distribution directly on the given problem representation. We ask how search distributions can instead be modeled indirectly, through a proper choice of factorial genetic code. For instance, instead of learning a direct probabilistic model of the dependencies between variables (as BOA does), one can learn a genetic representation of solutions on which these dependencies vanish. We consider questions such as: Can every distribution be induced indirectly by a proper factorial representation? How can such representations be constructed from data? Are specific generative representations, like grammars or L-systems, universal w.r.t. inducing arbitrary distributions? We will consider latent variable probabilistic models as a framework to address such questions.
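To make the core idea concrete, here is a minimal sketch (illustrative only, not taken from the paper; the map `decode` and the parameter `eps` are hypothetical choices): two genotype bits are sampled independently, i.e., the genotype code is factorial, yet the genotype-to-phenotype map induces a strongly correlated distribution over the two phenotype variables.

```python
import random
from collections import Counter

def sample_genotype(eps=0.1):
    # Factorial code: each genotype bit is sampled independently.
    g1 = random.random() < 0.5
    g2 = random.random() < eps
    return g1, g2

def decode(g1, g2):
    # Hypothetical genotype-to-phenotype map: x1 = g1, x2 = g1 XOR g2.
    x1 = g1
    x2 = g1 != g2
    return int(x1), int(x2)

# Sampling independent genotypes and decoding yields phenotypes where
# x1 = x2 with probability 1 - eps, a dependency never modeled explicitly.
counts = Counter(decode(*sample_genotype()) for _ in range(100_000))
print(counts)
```

The dependency between the phenotype variables is captured entirely by the representation rather than by an explicit probabilistic model of their interactions; this is the sense in which a suitable factorial code can model a search distribution indirectly.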