Data Farming is a methodology and capability that uses high performance computing to run models many times. This capability gives modelers and their clients an enhanced ability to discover trends and outliers in results, conduct sensitivity studies, verify and validate over extended ranges of input parameters, and model and analyze non-linear phenomena with characteristics that cannot be precisely defined. As high performance computing, in the form of distributed computing capabilities and commodity node systems, becomes more pervasive and cost effective, Data Farming can become more available to modelers. In this paper the authors summarize Data Farming and the processes and data architecture of Data Farming systems that make high performance computing readily available to modelers.
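As a rough illustration of the core idea (running a model many times over a full factorial design of input parameters, then scanning the results for outliers), consider the sketch below. The model, its parameter names, and the outlier criterion are all hypothetical stand-ins, not taken from the paper; a real Data Farming study would distribute these runs across high performance computing nodes rather than a single loop.

```python
from itertools import product
from statistics import mean, stdev

def run_model(aggression, sensor_range):
    """Toy stand-in for a simulation run; the non-linear formula is illustrative."""
    return aggression * sensor_range + (aggression - sensor_range) ** 2

def farm(aggression_values, sensor_values):
    """Execute the model at every point of a full factorial design."""
    return {
        (a, s): run_model(a, s)
        for a, s in product(aggression_values, sensor_values)
    }

def find_outliers(landscape, z=2.0):
    """Flag design points whose output lies more than z std devs from the mean."""
    m, sd = mean(landscape.values()), stdev(landscape.values())
    return {p: v for p, v in landscape.items() if abs(v - m) > z * sd}

landscape = farm(range(5), range(5))   # 25 design points
outliers = find_outliers(landscape)
```

The resulting "landscape" of outputs over the input space is what supports the trend discovery, sensitivity analysis, and verification over extended parameter ranges described above.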
Gary E. Horne, Theodore E. Meyer