To design virtual agents that simulate humans in repeated decision making under uncertainty, we seek to quantitatively characterize actual human behavior in these settings. We collect our data from 800 real human subjects through a large-scale randomized online experiment. We evaluate how well a wide range of computational models fit these data, both by conducting a scalable search through the space of two-component models (i.e., an inference model paired with a selection model) and by investigating a few rules of thumb. Our results suggest that, across different decision-making environments, the average human decision maker is best described by a two-component model composed of an inference model that relies heavily on more recent information (i.e., displays recency bias) and a selection model that assumes cost-proportional errors and reluctance to change choices in subsequent trials (i.e., displays status-quo bias). Additionally, while a large portion of individuals behave like the ave...
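
The two-component structure described above can be sketched as follows. This is a minimal illustration, not the paper's actual model: the exponential-discounting inference rule, the softmax selection rule, and all parameter names (`gamma`, `beta`, `stickiness`) are assumptions chosen only to exhibit recency bias, cost-proportional errors, and status-quo bias.

```python
import math

def infer(observations, gamma=0.8):
    """Belief that option A pays off, with exponentially discounted
    older observations (recency bias): the newest observation gets
    weight 1, each older one a factor of gamma less."""
    w_sum = p_sum = 0.0
    for t, obs in enumerate(observations):
        w = gamma ** (len(observations) - 1 - t)
        w_sum += w
        p_sum += w * obs
    return p_sum / w_sum

def select(belief, payoffs, prev_choice=None, beta=5.0, stickiness=1.0):
    """Softmax over expected payoffs, so error rates shrink as the cost
    of an error grows (cost-proportional errors), plus a utility bonus
    for repeating the previous choice (status-quo bias)."""
    utilities = [belief * payoffs[0], (1 - belief) * payoffs[1]]
    if prev_choice is not None:
        utilities[prev_choice] += stickiness  # reluctance to switch
    exps = [math.exp(beta * u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]  # choice probabilities
```

For example, `select(infer([1, 1, 0, 1]), payoffs=(1.0, 1.0), prev_choice=0)` yields choice probabilities tilted both toward the recently observed evidence and toward the previously chosen option.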