This paper introduces an approach to automatic basis function construction for Hierarchical Reinforcement Learning (HRL) tasks. We describe considerations that arise when constructing basis functions for multi-level task hierarchies, and we extend previous work on Laplacian bases for value function approximation to settings where the agent is provided with a multi-level action hierarchy. We evaluate these techniques experimentally on the Taxi domain.
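As background for the Laplacian bases mentioned above, the following is a minimal sketch (not the paper's hierarchical construction) of how such bases are typically obtained in the flat case: the low-order eigenvectors of the state graph's Laplacian serve as smooth basis functions for value function approximation. The chain-graph example and function name are illustrative assumptions.

```python
import numpy as np

def laplacian_basis(adjacency, k):
    """Return the k smoothest eigenvectors of the combinatorial graph
    Laplacian of a state graph given by a symmetric adjacency matrix.
    (Illustrative sketch of flat Laplacian bases, not the paper's
    hierarchical construction.)"""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency  # combinatorial Laplacian L = D - A
    eigvals, eigvecs = np.linalg.eigh(laplacian)  # ascending eigenvalues
    # Eigenvectors with the smallest eigenvalues vary slowly over the
    # graph, making them useful features for approximating value functions.
    return eigvecs[:, :k]

# Example: state graph of a 5-state chain MDP
n = 5
adj = np.zeros((n, n))
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0

basis = laplacian_basis(adj, 3)  # one column per basis function
```

A value function is then approximated as a linear combination of these columns, with weights learned by a method such as least-squares policy iteration.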