In this paper, we develop an efficient logistic regression model for multiple instance learning that combines L1 and L2 regularisation. An L1-regularised logistic regression model is first learned to identify the sparsity pattern of the features. To train the L1 model efficiently, we employ a convex, differentiable approximation of the L1 cost function that can be minimised by a quasi-Newton method. We then train an L2-regularised logistic regression model on only the subset of features assigned nonzero weights by the L1 model. Experimental results demonstrate the utility and efficiency of the proposed approach compared to a number of alternatives.
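The two-stage pipeline described above can be sketched as follows. This is a minimal illustration using scikit-learn on ordinary supervised data: the multiple-instance bag structure is not reproduced, and scikit-learn's liblinear solver stands in for the paper's smoothed-L1 quasi-Newton method; the dataset, regularisation strengths, and solver choices are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data standing in for bag-level MIL features (the paper's
# multiple-instance setup is not reproduced here).
X, y = make_classification(n_samples=200, n_features=50,
                           n_informative=5, random_state=0)

# Stage 1: L1-regularised logistic regression to recover a sparse
# weight pattern. (The paper smooths the L1 term and applies a
# quasi-Newton method; liblinear is used here as a stand-in solver.)
l1 = LogisticRegression(penalty="l1", C=0.1, solver="liblinear")
l1.fit(X, y)
support = np.flatnonzero(l1.coef_.ravel())  # features with nonzero weight

# Stage 2: L2-regularised logistic regression restricted to the
# selected features, solved with lbfgs (itself a quasi-Newton method).
l2 = LogisticRegression(penalty="l2", C=1.0, solver="lbfgs")
l2.fit(X[:, support], y)

print("selected features:", len(support))
print("training accuracy:", l2.score(X[:, support], y))
```

Restricting the second stage to the L1-selected features keeps the L2 model small and fast to train while retaining the smoother, non-sparse fit of L2 regularisation on the retained features.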