Shaping functions can be used in multi-task reinforcement learning (RL) to incorporate knowledge from previously experienced tasks and thereby speed up learning on a new task. So far, researchers have pre-specified separate representations for the shaping and value functions in multi-task settings. However, no work has made precise what distinguishes these representations, or what makes a good representation for either function. This paper presents two alternative methods by which an evolutionary algorithm can find a shaping function in multi-task RL without pre-specifying a separate representation. The second method, which uses an indirect fitness measure, is shown to achieve performance similar to the first at a significantly lower computational cost. In addition, we define a formal categorisation of representations that makes precise what constitutes a good representation for shaping and value functions. We validate the categorisation with an evolutionary feature selection method and show t...
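For context, a minimal sketch of how a shaping function is typically incorporated, using the standard potential-based formulation (the shaping functions considered in this paper need not take this exact form): the agent learns from an augmented reward

\[
  r'(s, a, s') \;=\; r(s, a, s') \;+\; F(s, a, s'),
  \qquad
  F(s, a, s') \;=\; \gamma\,\Phi(s') - \Phi(s),
\]

where $r$ is the environment reward, $\Phi$ is a potential function over states, and $\gamma$ is the discount factor; shaping of this potential-based form is known to leave the optimal policy unchanged.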