Autonomous agents interacting in an open world can be regarded as primarily driven by self-interest. Previous work in this area has prescribed a strategy of reciprocal behavior, based on past interactions, for promoting and sustaining cooperation among such self-interested agents. Here we present a new mechanism in which agents base their decisions both on historical data and on expectations about future interactions. The proposed decision mechanism compares the current cost of helping with the expected future savings from interacting with the agent requesting help. We experiment with heterogeneous agents whose expertise varies across job types, and we evaluate how changes in agent expertise and in the distribution of task types affect subsequent agent relationships. The reciprocity mechanism based on future expectations proves robust and flexible in adjusting to environmental dynamics.
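The core decision rule, comparing the current cost of helping against projected future savings, can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the class and method names, the per-partner history records, and the projection scheme (average past saving per interaction times an assumed number of expected future interactions) are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ReciprocityAgent:
    # Per-partner record of savings obtained from past help received
    # (assumed bookkeeping, not prescribed by the paper).
    past_savings: dict = field(default_factory=dict)
    # Assumed estimate of how many future interactions to expect per partner.
    expected_future_interactions: dict = field(default_factory=dict)

    def expected_future_savings(self, partner: str) -> float:
        """Project future savings from `partner` as the historical average
        saving per interaction times the expected future interactions."""
        history = self.past_savings.get(partner, [])
        if not history:
            return 0.0
        avg_saving = sum(history) / len(history)
        return avg_saving * self.expected_future_interactions.get(partner, 0)

    def should_help(self, partner: str, current_cost: float) -> bool:
        """Help iff projected future savings outweigh the current cost."""
        return self.expected_future_savings(partner) > current_cost

    def record_help_received(self, partner: str, saving: float) -> None:
        """Update the interaction history after receiving help."""
        self.past_savings.setdefault(partner, []).append(saving)
```

Under this sketch, an agent with no history of a partner projects zero future savings and declines to help, while a partner with a strong record of reciprocation justifies a higher current helping cost.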