A new method is proposed to estimate the nonlinear functions in an additive regression model. Usually, these functions are estimated by penalized least squares, penalizing the curvature of each function. The new method penalizes the slopes as well, the type of penalization used in ridge regression for linear models. Tuning (or smoothing) parameters are estimated by permuted leave-k-out cross-validation. The prediction performance of the methods is compared in a simulation experiment: penalizing both slope and curvature is either better than, or as good as, penalizing curvature only.
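For concreteness, the penalized least-squares criterion described above can be sketched as follows; the abstract does not fix the basis or the exact form of the penalties, so the symbols below are illustrative:

$$
\min_{f_1,\ldots,f_p} \; \sum_{i=1}^{n}\Bigl(y_i - \alpha - \sum_{j=1}^{p} f_j(x_{ij})\Bigr)^{2}
 + \sum_{j=1}^{p}\Bigl(\kappa_j \int f_j'(t)^{2}\,dt + \lambda_j \int f_j''(t)^{2}\,dt\Bigr),
$$

where the $\lambda_j$ terms give the usual curvature penalty, and the additional $\kappa_j$ terms shrink the slopes in analogy with the ridge penalty on coefficients in a linear model. Setting all $\kappa_j = 0$ recovers penalization of curvature only; both sets of tuning parameters are the quantities selected by cross-validation.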