Regularization

Likelihood adaptively modified penalties

A new family of penalty functions, i.e., penalties adaptive to the likelihood, is introduced for model selection in general regression models. It arises naturally from assuming certain types of prior distributions on the regression parameters. To study the …

Regularization after retention in ultrahigh dimensional linear regression models

In the ultrahigh dimensional setting, independence screening has been shown, both theoretically and empirically, to be a useful variable selection framework with low computational cost. In this work, we propose a two-step framework by using marginal information …
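The two-step idea in this abstract — screen by marginal information, then regularize on the retained set — can be illustrated with a minimal sketch. This is a hypothetical implementation, not the authors' procedure: the screening size `d`, the lasso penalty `lam`, and all function names are illustrative choices.

```python
import numpy as np

def marginal_screen(X, y, d):
    """Step 1: return indices of the d covariates most correlated with y."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
    return np.argsort(corr)[::-1][:d]

def lasso_cd(X, y, lam, n_iter=200):
    """Step 2: plain coordinate-descent lasso on the retained columns."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]      # partial residual
            z = X[:, j] @ r_j
            beta[j] = np.sign(z) * max(abs(z) - lam, 0.0) / col_sq[j]
    return beta

rng = np.random.default_rng(0)
n, p = 100, 1000                                 # ultrahigh dimensional: p >> n
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.standard_normal(n)

keep = marginal_screen(X, y, d=20)               # screening step
beta_sub = lasso_cd(X[:, keep], y, lam=5.0)      # regularization step
selected = keep[np.abs(beta_sub) > 1e-6]         # final selected covariates
```

Screening reduces the problem from p = 1000 columns to d = 20 before any penalized fit is run, which is where the low computational cost comes from.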

Model Selection for High Dimensional Quadratic Regression via Regularization

Quadratic regression (QR) models naturally extend linear models by considering interaction effects between the covariates. To conduct model selection in QR, it is important to maintain the hierarchical model structure between main effects and …
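To make the hierarchical structure concrete, here is a toy sketch (not the paper's method; the helper names and the strong-hierarchy variant shown are illustrative assumptions): in a strong-hierarchy model, an interaction x_j * x_k may enter only when both parent main effects x_j and x_k are already included.

```python
import itertools
import numpy as np

def quadratic_design(X):
    """Augment X with all pairwise interactions x_j * x_k, j <= k."""
    n, p = X.shape
    pairs = list(itertools.combinations_with_replacement(range(p), 2))
    inter = np.column_stack([X[:, j] * X[:, k] for j, k in pairs])
    return np.hstack([X, inter]), pairs

def enforce_strong_hierarchy(main_idx, inter_pairs):
    """Keep only interactions whose parent main effects are both selected."""
    main = set(main_idx)
    return [(j, k) for j, k in inter_pairs if j in main and k in main]

X = np.random.default_rng(1).standard_normal((50, 4))
Z, pairs = quadratic_design(X)        # 4 main effects + 10 interaction columns
ok = enforce_strong_hierarchy([0, 2], pairs)
# ok contains only (0, 0), (0, 2), (2, 2)
```

Note the dimension growth: p main effects generate p(p + 1)/2 interaction columns, which is why model selection in QR needs the hierarchy constraint to stay tractable.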

Post selection shrinkage estimation for high-dimensional data analysis

In high-dimensional data settings where p ≫ n, many penalized regularization approaches have been studied for simultaneous variable selection and estimation. However, in the presence of covariates with weak effects, many existing variable selection …

Regularization After Marginal Learning for Ultra-High Dimensional Regression Models

Regularization is a popular variable selection technique for high dimensional regression models. However, under the ultra-high dimensional setting, a direct application of the regularization methods tends to fail in terms of model selection …

Tuning-parameter selection in regularized estimations of large covariance matrices

Recently, many regularized estimators of large covariance matrices have been proposed, and the tuning parameters in these estimators are usually selected via cross-validation. However, there is a lack of consensus on the number of folds for conducting …
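As a concrete instance, hard thresholding is one such regularized covariance estimator, and its threshold is commonly tuned by V-fold cross-validation. The sketch below is illustrative, not any particular paper's scheme — the Frobenius-loss criterion, the fold split, and the candidate grid are all assumed choices — but it shows the mechanics that the choice of V affects.

```python
import numpy as np

def hard_threshold(S, t):
    """Zero out off-diagonal entries of S smaller than t in absolute value."""
    T = np.where(np.abs(S) >= t, S, 0.0)
    np.fill_diagonal(T, np.diag(S))
    return T

def cv_threshold(X, thresholds, V=5, seed=0):
    """Pick the threshold minimizing average Frobenius loss across V folds."""
    n = X.shape[0]
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, V)
    losses = np.zeros(len(thresholds))
    for f in folds:
        train = np.setdiff1d(idx, f)
        S_tr = np.cov(X[train], rowvar=False)   # estimate on training part
        S_te = np.cov(X[f], rowvar=False)       # validate on held-out part
        for i, t in enumerate(thresholds):
            losses[i] += np.sum((hard_threshold(S_tr, t) - S_te) ** 2)
    return thresholds[int(np.argmin(losses))]

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 30))              # true covariance = identity
best_t = cv_threshold(X, thresholds=[0.0, 0.05, 0.1, 0.2, 0.5])
```

Changing V changes both the size of each training covariance and the noise level of the validation loss, which is exactly the trade-off the abstract refers to.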

Regularized principal components of heritability

In family studies with multiple continuous phenotypes, heritability can be conveniently evaluated through the so-called principal component of heredity (PCH for short; Ott and Rabinowitz in Hum Hered 49:106-111, 1999). Estimation of the PCH, …

A road to classification in high dimensional space: the regularized optimal affine discriminant

For high dimensional classification, it is well known that naively applying the Fisher discriminant rule leads to poor results due to diverging spectra and the accumulation of noise. Therefore, researchers have proposed independence rules to circumvent the …
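The two rules the abstract contrasts can be sketched side by side (an illustrative toy, not the paper's estimator): the Fisher rule uses the inverse pooled covariance, while the independence rule replaces that covariance with its diagonal, avoiding the inversion of an ill-conditioned p x p matrix when p is large.

```python
import numpy as np

def fisher_rule(X0, X1, x):
    """Classify x as class 1 iff the Fisher discriminant is positive."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    S = (np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)) / 2
    w = np.linalg.solve(S, mu1 - mu0)           # requires inverting S
    return float(w @ (x - (mu0 + mu1) / 2)) > 0

def independence_rule(X0, X1, x):
    """Same discriminant, but with the covariance replaced by its diagonal."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    s2 = (X0.var(axis=0, ddof=1) + X1.var(axis=0, ddof=1)) / 2
    w = (mu1 - mu0) / s2                        # per-coordinate scaling only
    return float(w @ (x - (mu0 + mu1) / 2)) > 0

rng = np.random.default_rng(3)
X0 = rng.standard_normal((40, 5))
X1 = rng.standard_normal((40, 5)) + 2.0         # mean shift of 2 per coordinate
```

With p = 5 and n = 40 both rules behave well; the abstract's point is that as p grows past n, the sample covariance in `fisher_rule` becomes singular and noise-dominated, while the diagonal version stays usable — at the cost of ignoring correlations.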

Network exploration via the adaptive LASSO and SCAD penalties

Graphical models are frequently used to explore networks, such as genetic networks, among a set of variables. This is usually carried out via exploring the sparsity of the precision matrix of the variables under consideration. Penalized likelihood …
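The two penalties named in the title can be written down compactly. The sketch below gives the SCAD penalty with Fan and Li's recommended a = 3.7, and the adaptive-LASSO weighting of Zou (the `gamma` and `eps` defaults here are illustrative); either can replace the plain l1 term in the penalized-likelihood objective for the precision matrix.

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """Elementwise SCAD penalty: l1 near zero, constant for large |beta|."""
    b = np.abs(beta)
    linear = lam * b                                           # |b| <= lam
    quad = -(b**2 - 2 * a * lam * b + lam**2) / (2 * (a - 1))  # lam < |b| <= a*lam
    const = (a + 1) * lam**2 / 2                               # |b| > a*lam
    return np.where(b <= lam, linear, np.where(b <= a * lam, quad, const))

def adaptive_lasso_weights(beta_init, gamma=1.0, eps=1e-8):
    """Adaptive-LASSO weights 1/|beta_init|^gamma from an initial estimate."""
    return 1.0 / (np.abs(beta_init) + eps) ** gamma
```

Both penalties shrink small coefficients like the LASSO but penalize large ones less, which is what yields the sparser, less biased precision-matrix estimates that network exploration relies on.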