Prediction models play a paramount role in fields such as psychology and medicine, where the aim is to maximize predictive performance while ensuring high interpretability and stability. Prediction rule ensembles are a recent statistical learning method that addresses the black-box problem of common machine learning methods. First, an ensemble of trees is fitted; then, by employing sparse regression such as the lasso, only a subset of those trees is retained in the final ensemble, enhancing interpretability. However, the lasso suffers from drawbacks: the optimal penalty parameter for variable selection may lead to over-shrinkage of large coefficients. This study investigates whether the accuracy, sparsity, and stability of prediction rule ensembles can be improved by using the adaptive lasso, the relaxed lasso, or their combination. In the adaptive lasso, a weight parameter is assigned to each coefficient in the penalty term, while in the relaxed lasso the lasso coefficients are debiased towards their unpenalized values. In addition, we compared whether the results differ between model selection based on the lambda-1se and lambda-min criteria, and between continuous and binary outcomes. For this, the models were evaluated on nine benchmark datasets using repeated 10-fold cross-validation. The results show that all lasso variations significantly improve model sparsity while maintaining high accuracy, but at the cost of stability. The relaxed and adaptive lasso select sparser models than the standard lasso while achieving good stability of variable selection, but at the cost of less stable predictions. The relaxed adaptive lasso yields the sparsest model, but is the most unstable.
Regarding the lambda criterion, for continuous outcomes the lambda-min criterion leads to highly unstable results and diminishes the effect of the lasso approach used. For binary outcomes, the lambda-1se criterion improves only accuracy and sparsity, not stability, while for continuous outcomes it improves all performance diagnostics.
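The adaptive-lasso weighting described above can be sketched in a few lines. This is a minimal illustration of the general two-stage recipe (an initial fit supplies per-coefficient weights for the penalty term), not the paper's exact implementation; the data, the ridge initial fit, the weight exponent gamma, and the penalty value alpha are all assumptions chosen for the example.

```python
# Sketch of the adaptive lasso: an initial ridge fit supplies weights
# w_j = 1/|beta_j|^gamma, and rescaling the columns of X by 1/w_j turns a
# plain lasso fit into a lasso with per-coefficient penalty weights.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
true_beta = np.zeros(p)
true_beta[:3] = [3.0, -2.0, 1.5]          # only 3 truly active predictors
y = X @ true_beta + rng.standard_normal(n)

# Stage 1: initial coefficient estimates (here from a ridge fit).
init = Ridge(alpha=1.0).fit(X, y).coef_
gamma = 1.0
weights = 1.0 / (np.abs(init) ** gamma + 1e-8)

# Stage 2: lasso on rescaled columns == lasso with weighted penalty.
X_scaled = X / weights                     # column j divided by w_j
lasso = Lasso(alpha=0.1).fit(X_scaled, y)
beta_adaptive = lasso.coef_ / weights      # map back to the original scale

print("selected predictors:", np.flatnonzero(beta_adaptive != 0))
```

Large initial coefficients receive small weights (little shrinkage), while near-zero ones are penalized heavily, which is the mechanism the abstract credits for reduced over-shrinkage and sparser selection.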