IP address lookup is challenging for high-performance routers because it requires computing a longest matching prefix at speeds of up to 10 Gbps (OC-192). Existing solutions either have poor update times or require large amounts of expensive high-speed memory, and they do not take advantage of the flexibility of ASICs or the structure of modern high-speed memory technologies such as SDRAM and RAMBUS. In this paper, we present a family of IP lookup schemes based on a data structure that compactly encodes large prefix tables; for example, one of our reference implementations requires only 110 Kbytes of memory for the 41,000-entry Mae East database. The schemes can be instantiated to require a maximum of 4-7 memory references which, together with a small amount of pipelining, allows wire-speed forwarding at OC-192 (10 Gbps) rates. We also present a series of optimizations to the core algorithm that allow the memory access width to be reduced at the cost of additional memory references or allocated memory.
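To make the abstract concrete, the sketch below shows a minimal longest-prefix-match lookup over a bitmap-compressed multibit trie, the general style of compact prefix encoding the abstract alludes to. It is an illustrative assumption, not the paper's exact node layout: the 4-bit stride, the field names (`internal`, `external`, `children`, `results`), and the handling of full-length prefixes are choices made here for clarity.

```c
/* Illustrative sketch: longest-prefix match over a bitmap-compressed
 * multibit trie.  Stride, field names, and layout are assumptions for
 * clarity, not the paper's exact encoding. */
#include <stdint.h>
#include <stdio.h>

#define STRIDE 4                      /* address bits consumed per trie node */

typedef struct Node {
    uint16_t internal;                /* 15-bit bitmap: prefixes of length 0..3 within this node */
    uint16_t external;                /* 16-bit bitmap: which 4-bit paths have a child node */
    const struct Node *children;      /* contiguous child array, indexed by popcount */
    const uint8_t *results;           /* contiguous next-hop array, indexed by popcount */
} Node;

/* Return next hop for addr, or -1 if no prefix matches.
 * (Prefixes of length 32 would need one extra level or an end-node
 * convention; omitted here to keep the sketch short.) */
static int lookup(const Node *n, uint32_t addr)
{
    int best = -1;

    for (int shift = 32 - STRIDE; shift >= 0; shift -= STRIDE) {
        uint32_t chunk = (addr >> shift) & 0xF;

        /* Check prefixes stored inside the node, longest first: a prefix of
         * length L (0 <= L < STRIDE) sits at bit (2^L - 1) + (chunk >> (STRIDE - L)). */
        for (int len = STRIDE - 1; len >= 0; len--) {
            int pos = (1 << len) - 1 + (int)(chunk >> (STRIDE - len));
            if (n->internal & (1u << pos)) {
                /* Rank of this bit among set internal bits gives the result index. */
                best = n->results[__builtin_popcount(n->internal & ((1u << pos) - 1))];
                break;
            }
        }

        /* Descend only if a child exists for this 4-bit chunk. */
        if (!(n->external & (1u << chunk)))
            break;
        n = &n->children[__builtin_popcount(n->external & ((1u << chunk) - 1))];
    }
    return best;
}

int main(void)
{
    /* Tiny example table: a default route (0/0 -> next hop 1) at the root
     * and a single prefix 1010/4 -> next hop 7 stored in one child node. */
    static const uint8_t child_hops[] = { 7 };
    static const Node child = { 0x0001, 0x0000, NULL, child_hops };
    static const uint8_t root_hops[] = { 1 };
    static const Node root = { 0x0001, 1u << 0xA, &child, root_hops };

    printf("%d\n", lookup(&root, 0xA0000000u));  /* matches 1010/4 -> prints 7 */
    printf("%d\n", lookup(&root, 0x10000000u));  /* default route -> prints 1 */
    return 0;
}
```

The design point the sketch illustrates is that each node stores only bitmaps plus two base pointers, so a lookup costs one memory reference per stride (plus popcounts, which are cheap in an ASIC), matching the abstract's claim of a small, bounded number of memory references per lookup.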