In many negotiation and bargaining scenarios, a particular agent may need to interact repeatedly with another agent. Typically, these interactions take place under incomplete information, i.e., an agent does not know exactly which offers may be acceptable to its opponent or what outside options are available to that opponent. In such situations, an agent can benefit from learning its opponent's decision model from past experience. In particular, being able to accurately predict opponent decisions enables an agent to generate offers that optimize its own utility. In this paper, we present a learning mechanism using Chebyshev polynomials by which an agent can approximately model the decision function used by the other agent based on that opponent's decision history. We study a repeated one-shot negotiation model that incorporates uncertainty about the opponent's valuation and outside options. We evaluate the proposed modeling mechanism for optimizing agent utility...
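To make the idea concrete, the following minimal sketch (not the paper's actual algorithm) shows how an agent might fit a Chebyshev polynomial to an opponent's observed accept/reject history and then pick the offer that maximizes its own expected utility. The history data, polynomial degree, and utility function are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Hypothetical history: past offers (normalized to [0, 1]) and the
# opponent's observed responses (1 = accepted, 0 = rejected).
offers = np.array([0.10, 0.25, 0.30, 0.45, 0.50, 0.60, 0.75, 0.90])
responses = np.array([1, 1, 1, 1, 0, 0, 0, 0])

# Fit a low-degree Chebyshev series as an approximation of the opponent's
# unknown acceptance function over the offer space.
accept_model = Chebyshev.fit(offers, responses, deg=3, domain=[0.0, 1.0])

# Choose the offer that maximizes expected utility: the agent's own payoff
# from an offer times the predicted chance the opponent accepts it.
candidate_offers = np.linspace(0.0, 1.0, 101)
own_utility = candidate_offers                      # assumed: utility grows with the share kept
p_accept = np.clip(accept_model(candidate_offers), 0.0, 1.0)
expected_utility = own_utility * p_accept
best_offer = candidate_offers[np.argmax(expected_utility)]
print(f"best offer ~ {best_offer:.2f}, expected utility ~ {expected_utility.max():.3f}")
```

In this sketch the learned polynomial stands in for the opponent's decision function; as more interactions are observed, the fit can simply be recomputed over the growing history.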