plgp: Particle Learning of Gaussian Processes

Sequential Monte Carlo inference for fully Bayesian Gaussian process (GP) regression and classification models by particle learning (PL). The sequential nature of inference and the active learning (AL) hooks provided facilitate thrifty sequential design (by entropy) and optimization (by improvement) for classification and regression models, respectively. The package essentially provides a generic PL interface together with functions (arguments to that interface) that implement the GP models and AL heuristics. Functions for a special, linked, regression/classification GP model and an integrated expected conditional improvement (IECI) statistic are provided for optimization in the presence of unknown constraints. Separable and isotropic Gaussian, and single-index correlation functions are supported. See the examples section of ?plgp and demo(package="plgp") for an index of demos.
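
As a minimal quick-start sketch (assuming an internet-enabled R session; the only calls used are base R utilities and the package load itself):

    install.packages("plgp")        # fetch the package from CRAN, if not already installed
    library(plgp)                   # load the package
    ?plgp                           # package overview; its Examples section indexes the demos
    demo(package = "plgp")          # list the demos shipped with the package
    ## then run one of the listed demos, e.g. demo("<demo-name>", package = "plgp")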

Version: 1.1-5
Depends: R (≥ 2.4), mvtnorm, tgp
Suggests: akima, ellipse, splancs
Published: 2012-07-24
Author: Robert B. Gramacy
Maintainer: Robert B. Gramacy <rbgramacy at chicagobooth.edu>
License: LGPL-2 | LGPL-2.1 | LGPL-3 [expanded from: LGPL] (see file LICENSE)
URL: http://faculty.chicagobooth.edu/robert.gramacy/plgp.html
NeedsCompilation: yes
Materials: ChangeLog
In views: ExperimentalDesign
CRAN checks: plgp results

Downloads:

Reference manual: plgp.pdf
Package source: plgp_1.1-5.tar.gz
OS X binary: plgp_1.1-5.tgz
Windows binary: plgp_1.1-5.zip
Old sources: plgp archive

Reverse dependencies:

Reverse suggests: dynaTree, reglogit