Testing a database engine has been, and continues to be, a challenging task. The space of possible SQL queries, together with their possible access paths, is practically unbounded. Moreover, this space keeps growing as the feature set of modern DBMSs expands with every product release. To tackle these problems, random query generator tools have been used to create large numbers of test cases. While such test case generators enable the creation of complex and syntactically correct SQL queries, they do not guarantee that the generated queries return results or exercise the desired DBMS components. Very often the generated queries contain logical contradictions that are detected at the query optimization layer, so they fail to exercise the lower layers of the database engine (query execution, access methods, etc.). In this paper we present a random test case generation technique that addresses the above problems. Our technique utilizes execution feedback, obt...
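To make the feedback-driven idea concrete, the following is a minimal sketch, not the technique presented in this paper: it randomly composes conjunctive predicates (which frequently contradict each other) and uses execution feedback from an in-memory SQLite database to discard queries that return no rows. The schema, the `random_query` and `generate_with_feedback` helpers, and the acceptance criterion are all illustrative assumptions.

```python
import random
import sqlite3

# Illustrative toy schema and data; a real generator would target the
# catalog of the system under test (assumption, not from the paper).
SETUP = [
    "CREATE TABLE t (a INTEGER, b INTEGER)",
    "INSERT INTO t VALUES (1, 10), (2, 20), (3, 30), (4, 40)",
]

COLUMNS = ["a", "b"]
OPERATORS = ["<", "<=", "=", ">=", ">"]


def random_predicate() -> str:
    """Build one random comparison predicate over the toy schema."""
    return f"{random.choice(COLUMNS)} {random.choice(OPERATORS)} {random.randint(0, 50)}"


def random_query(num_predicates: int = 2) -> str:
    """Compose a SELECT with a conjunction of random predicates.

    Conjunctions of independently chosen predicates often contradict each
    other, producing empty results -- the problem execution feedback detects.
    """
    where = " AND ".join(random_predicate() for _ in range(num_predicates))
    return f"SELECT * FROM t WHERE {where}"


def generate_with_feedback(conn: sqlite3.Connection,
                           wanted: int = 5,
                           max_attempts: int = 10_000) -> list[str]:
    """Keep only queries whose execution returns rows, i.e. queries that
    actually exercise the execution layer instead of being pruned as empty."""
    accepted: list[str] = []
    for _ in range(max_attempts):
        if len(accepted) >= wanted:
            break
        sql = random_query()
        rows = conn.execute(sql).fetchall()  # execution feedback
        if rows:                             # discard contradictory/empty queries
            accepted.append(sql)
    return accepted


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    for stmt in SETUP:
        conn.execute(stmt)
    for q in generate_with_feedback(conn):
        print(q)
```

In this sketch the feedback signal is simply "did the query return any rows"; richer signals (e.g. which operators or access methods the plan used) could be substituted without changing the overall generate-execute-filter loop.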