As vision algorithms mature with increasing inspiration from the learning community, statistically independent pseudo-random number generation (PRNG) becomes increasingly important. At the same time, execution-time demands have pushed algorithm implementations onto evolving parallel hardware such as GPUs. The Mersenne Twister (MT) [7] has proven to be the current state of the art for generating high-quality random numbers, and the Nvidia-provided software for parallel MT is in widespread use. While execution time is important, development time is also critical. As processor cardinality changes, a generator that lets simulations vary only in execution time, and not in their actual results, is valuable; without it, development time suffers. In this paper, we present a GPU implementation of the Lagged Fibonacci Generator (LFG), which is considered [7] to be of quality equal to MT. Unlike MT, the LFG has this important processor-cardinality-agnostic capability.
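
To make the LFG recurrence concrete, the sketch below implements a plain sequential additive LFG, x_n = (x_{n-24} + x_{n-55}) mod 2^32. The Mitchell-Moore lags (24, 55), the LCG-based seeding, and all identifiers are illustrative assumptions for this sketch, not the parameters of the GPU implementation presented in this paper.

#include <stdint.h>
#include <stdio.h>

/* Sketch of an additive lagged Fibonacci generator (LFG):
 *   x_n = (x_{n-24} + x_{n-55}) mod 2^32
 * The lags (24, 55) are the classic Mitchell-Moore choice, used here
 * purely for illustration. */

#define SHORT_LAG 24
#define LONG_LAG  55

static uint32_t state[LONG_LAG]; /* circular buffer of the last 55 outputs */
static int pos = 0;              /* index of the oldest entry, x_{n-55} */

/* Seed the lag table. Any scheme that leaves at least one odd entry
 * gives the additive LFG its full period; a simple LCG fill is assumed here. */
static void lfg_seed(uint32_t seed)
{
    for (int i = 0; i < LONG_LAG; i++) {
        seed = seed * 1664525u + 1013904223u; /* well-known LCG constants */
        state[i] = seed;
    }
    state[0] |= 1u; /* guarantee an odd entry */
    pos = 0;
}

/* One step of the recurrence; unsigned overflow gives the mod 2^32. */
static uint32_t lfg_next(void)
{
    /* x_{n-24} sits SHORT_LAG entries before the write position. */
    uint32_t x = state[(pos + LONG_LAG - SHORT_LAG) % LONG_LAG] + state[pos];
    state[pos] = x;                /* overwrite the retired x_{n-55} */
    pos = (pos + 1) % LONG_LAG;
    return x;
}

int main(void)
{
    lfg_seed(12345u);
    for (int i = 0; i < 5; i++)
        printf("%u\n", lfg_next());
    return 0;
}

Because each output depends only on two fixed-lag predecessors, the sequence is determined entirely by the seeded lag table; this is the property that allows the stream to be reproduced independently of how the work is later partitioned across processors.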