Removed unneeded legacy Fortran code, leaving only `coxnet`. Fixed up `Matrix` `as()` sequences.

Relatively minor bug fixes in the survival functions and `bigGlm`, and some improved failure messages.

Most of the Fortran code has been replaced by C++ by James Yang, leading to speedups in all cases. The exception is the Cox routine for right-censored data, which is still under development.

Some of the Fortran in glmnet has been replaced by C++, written by
the newest member of our team, James Yang.

- The `wls` routines (dense and sparse), which are the engines under
  the `glmnet.path` function when we use programmable families, are
  now written in C++, and lead to speedups of around 8x.
- The family of `elnet` routines (sparse/dense, covariance/naive) for
  `glmnet(..., family = "gaussian")` are all in C++, and lead to
  speedups of around 4x.

A new feature was added, as well as some minor fixes to documentation.

- The `exclude` argument has come to life. Users can now pass a
  function that takes arguments `x`, `y` and `weights`, or a subset of
  these, for filtering variables. Details are in the documentation and
  vignette.
- Prediction with a single `newx` observation failed before. This is
  fixed.
- Labeling of predictions from `cv.glmnet` is improved.
- Fixed a bug in the mortran/Fortran code that caused the program to
  loop ad infinitum.
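As a sketch of the filtering-function form of `exclude` (the variance filter here is illustrative, not from the package):

```r
library(glmnet)

set.seed(1)
x <- matrix(rnorm(100 * 20), 100, 20)
y <- rnorm(100)

# Hypothetical filter: drop variables whose sample variance falls
# below a cutoff; glmnet calls this on the data used for each fit.
low_var_filter <- function(x) which(apply(x, 2, var) < 0.7)

fit <- glmnet(x, y, exclude = low_var_filter)
```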

Fixed some bugs in the coxpath function to do with sparse `x`.

- When some penalty factors are zero and `x` is sparse, we should not
  call GLM to get the start.
- `apply` does not work as intended with sparse `x`, so we now use
  matrix multiplies instead in computing `lambda_max`.
- Added documentation for `cv.glmnet` to explain the implications of
  supplying `lambda`.

Expanded scope for the Cox model.

- We now allow (start, stop) data in addition to the original
  right-censored, all-start-at-zero option.
- Allow for strata as in `survival::coxph`.
- Allow for a sparse `x` matrix with Cox models (not available
  before).
- Provide a method for `survival::survfit`.
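A minimal sketch of the expanded Cox interface, on simulated data (`stratifySurv` is the glmnet helper for attaching strata to a `Surv` response):

```r
library(glmnet)
library(survival)

set.seed(2)
n <- 100; p <- 10
x <- matrix(rnorm(n * p), n, p)
begin <- runif(n, 0, 5)          # (start, stop) interval data
end <- begin + runif(n, 1, 3)
status <- rbinom(n, 1, 0.5)

y <- Surv(begin, end, status)
fit <- glmnet(x, y, family = "cox")

# Strata, as in survival::coxph:
ys <- stratifySurv(y, strata = rep(1:2, length.out = n))
fit_strat <- glmnet(x, ys, family = "cox")
```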

Vignettes are revised and reorganized. Additional index information is
stored on `cv.glmnet` objects, and included when printed.

- Biggest change: C-index and AUC calculations now use the
  `concordance` function from package `survival`.
- Minor changes: allow coefficient warm starts for `glmnet.fit`. The
  print method for `glmnet` now really prints %Dev rather than the
  fraction.

Major revision with added functionality. Any GLM family can now be
used with `glmnet`, not just the built-in families. By passing a
"family" object as the `family` argument (rather than a character
string), one gets access to all families supported by `glm`. This
development was programmed by the newest member of the `glmnet` team,
Kenneth Tay.
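For example, a sketch on simulated Poisson data:

```r
library(glmnet)

set.seed(3)
x <- matrix(rnorm(100 * 10), 100, 10)
y <- rpois(100, exp(x[, 1]))

# Character string: the fast built-in Poisson path
fit_builtin <- glmnet(x, y, family = "poisson")

# Family object: the general path, giving access to any glm() family
fit_general <- glmnet(x, y, family = poisson(link = "log"))
```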

Bug fixes

- `intercept = FALSE` with `family = "gaussian"` is fixed. The
  `dev.ratio` comes out correctly now. The mortran code was changed
  directly in 4 places; look for "standard". Thanks to Kenneth Tay.

Bug fixes

- `confusion.glmnet` was sometimes not returning a list because of
  `apply` collapsing the structure.
- `cv.mrelnet` and `cv.multnet` were dropping dimensions
  inappropriately.
- Fix to `storePB` to avoid a segfault. Thanks Tomas Kalibera!
- Changed the help for `assess.glmnet` and cousins to be more helpful!
- Changed some logic in `lambda.interp` to avoid edge cases (thanks
  David Keplinger).

Minor fix to correct the Depends field in the DESCRIPTION to R (>= 3.6.0).

This is a major revision with much added functionality, listed roughly
in order of importance. An additional vignette called `relax` is
supplied to describe the usage.

- `relax` argument added to `glmnet`. This causes the models in the
  path to be refit without regularization. The resulting object
  inherits from class `glmnet`, and has an additional component,
  itself a glmnet object, which is the relaxed fit.
- `relax` argument to `cv.glmnet`. This allows selection from a
  mixture of the relaxed fit and the regular fit. The mixture is
  governed by an argument `gamma`, with a default of 5 values between
  0 and 1.
- `predict`, `coef` and `plot` methods for `relaxed` and `cv.relaxed`
  objects.
- `print` method for `relaxed` objects, and new `print` methods for
  `cv.glmnet` and `cv.relaxed` objects.
- A progress bar is provided via an additional argument
  `trace.it = TRUE` to `glmnet` and `cv.glmnet`. This can also be set
  for the session via `glmnet.control`.
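The relaxed-fit workflow can be sketched as follows (simulated data; `gamma = "gamma.min"` picks the CV-chosen mixing value):

```r
library(glmnet)

set.seed(4)
x <- matrix(rnorm(100 * 20), 100, 20)
y <- rnorm(100)

fit <- glmnet(x, y, relax = TRUE)       # path plus unregularized refits
cvfit <- cv.glmnet(x, y, relax = TRUE)  # cross-validate over lambda and gamma

# Predict using the (lambda, gamma) pair chosen by CV:
predict(cvfit, newx = x[1:5, ], s = "lambda.min", gamma = "gamma.min")
```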

- Three new functions, `assess.glmnet`, `roc.glmnet` and
  `confusion.glmnet`, for displaying the performance of models.
- `makeX` for building the `x` matrix for input to `glmnet`. Main
  functionality is *one-hot-encoding* of factor variables, treatment
  of `NA`, and creating sparse inputs.
- `bigGlm` for fitting the GLMs of `glmnet` unpenalized.

In addition to these new features, some of the code in `glmnet` has
been tidied up, especially related to CV.
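A short sketch of `makeX` on a toy data frame (column names are illustrative):

```r
library(glmnet)

# Toy data frame mixing a factor and a numeric column containing an NA
df <- data.frame(grp = factor(c("a", "b", "a", "c")),
                 val = c(1.2, NA, 3.4, 0.5))

# One-hot-encode the factor, impute the NA, and return a sparse matrix
X <- makeX(df, na.impute = TRUE, sparse = TRUE)
```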

- Fixed a bug in the internal function `coxnet.deviance` to do with
  the input `pred`, as well as the saturated `loglike` (missing) and
  weights.
- Added a `coxgrad` function for computing the gradient.

- Fixed a bug in coxnet to do with ties between the death set and the risk set

- Added an option `alignment` to `cv.glmnet`, for cases when weird
  things happen

- Further fixes to mortran to get clean Fortran; current mortran src
  is in `inst/mortran`

- Additional fixes to mortran; current mortran src is in
  `inst/mortran`

- Mortran uses double precision, and variables are initialized to
  avoid `-Wall` warnings
- Cleaned up repeated code in CV by creating a utility function

- Fixed up the mortran so that a generic Fortran compiler can run without any configure

- Cleaned up some bugs to do with exact prediction; `newoffset`
  created problems all over, and these are fixed

- Added protection with `exact = TRUE` calls to `coef` and `predict`.
  See the help file for more details.

- Two iterations to fix native Fortran registration.

- Included native registration of Fortran routines

- Constant `y` blows up `elnet`; an error trap is included
- Fixed `lambda.interp`, which was returning `NaN` under degenerate
  circumstances

- Added some code to extract time and status gracefully from a `Surv`
  object

- Changed the usage of `predict` and `coef` with `exact = TRUE`. The
  user is strongly encouraged to supply the original `x` and `y`
  values, as well as any other data, such as weights, that were used
  in the original fit.
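A sketch of the encouraged calling pattern (simulated data; `s = 0.05` is an arbitrary value off the original path):

```r
library(glmnet)

set.seed(5)
x <- matrix(rnorm(100 * 10), 100, 10)
y <- rnorm(100)
fit <- glmnet(x, y)

# Refit exactly at a lambda not on the original path, resupplying x and y:
coef(fit, s = 0.05, exact = TRUE, x = x, y = y)
```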

- Major upgrade to CV: let each model use its own lambda sequence,
  then predict at the original set.
- Fixed some minor bugs.

- Fixed a subsetting bug in `lognet` when some weights are zero and
  `x` is sparse

- Fixed a bug in the multivariate response model (uninitialized
  variable), leading to valgrind issues
- Fixed an issue with the multinomial response matrix and zeros
- Added a link to a glmnet vignette

- Fixed a bug in `predict.glmnet`, `predict.multnet` and
  `predict.coxnet` when the `s=` argument is used with a vector of
  values; it was not doing the matrix multiply correctly
- Changed documentation of glmnet to explain the logistic response
  matrix

- added parallel capabilities, and fixed some minor bugs

- Added an `intercept` option

- Added upper and lower bounds for coefficients
- Added `glmnet.control` for setting system parameters
- Fixed a serious bug in `coxnet`
- Added an `exact = TRUE` option for the prediction and coef functions

- Major new release
- Added `mgaussian` family for multivariate response
- Added `grouped` option for the multinomial family

- Nasty bug fixed in Fortran; removed reference to `dble`
- Check the class of `newx` and coerce to `dgCMatrix` if sparse

- `lognet`: added a `classnames` component to the object
- `predict.lognet(type = "class")` now returns a character
  vector/matrix

- `predict.glmnet`: fixed a bug with `type = "nonzero"`
- `glmnet`: `x` can now inherit from `sparseMatrix` rather than the
  very specific `dgCMatrix`, and this will trigger sparse mode for
  glmnet

- `glmnet.Rd` (`lambda.min`): changed the value to 0.01 if
  `nobs < nvars`; (`lambda`): added warnings to avoid a single value;
  (`lambda.min`): renamed it `lambda.min.ratio`
- `glmnet` (`lambda.min`): changed the value to 0.01 if
  `nobs < nvars`; (`HessianExact`): changed the sense (it was wrong);
  (`lambda.min`): renamed it `lambda.min.ratio`. This allows it to be
  called `lambda.min` in a call though
- `predict.cv.glmnet` (new function): makes predictions directly from
  the saved `glmnet` object on the cv object
- `coef.cv.glmnet` (new function): as above
- `predict.cv.glmnet.Rd`: help files for the above
- `cv.glmnet`: insert `drop(y)` to avoid 1-column matrices; now
  includes a `glmnet.fit` object for later predictions
- `nonzeroCoef`: added a special case for a single variable in `x`;
  it was dying on this
- `deviance.glmnet`: included
- `deviance.glmnet.Rd`: included

- Note that this file starts from version `glmnet_1.4`.