GP
- class luas.GP(kf: Kernel, x_l: Array, x_t: Array, mf: Callable | None = None, logPrior: Callable | None = None, jit: bool | None = True)
Bases: object
Gaussian process class specialised to make the analysis of 2D data sets simple and efficient. Can be used with any Kernel such as LuasKernel to perform the required log-likelihood calculations in addition to performing GP regression. Support for calculating Laplace approximations of the posterior with respect to the input parameters is also provided, as this can be very useful when sampling large numbers of parameters (such as for tuning the MCMC).

Must have two separate input dimensions which are used to compute the covariance matrix and may additionally be used to compute the deterministic mean function. The observed data Y is assumed to have shape (N_l, N_t) and will be a parameter of most methods.

The first input x_l is the wavelength/vertical dimension of the observed data Y and is expected to have shape (N_l,) or (d_l, N_l), where N_l is the number of rows of Y and d_l is the number of regression variables along the wavelength/vertical dimension used for kernel inputs and/or mean function inputs. Similarly, x_t is assumed to lie along the time/horizontal dimension of the observed data with shape (N_t,) or (d_t, N_t), where N_t is the number of columns of Y and d_t is the number of regression variables along the column dimension used as kernel inputs and/or mean function inputs.

- Parameters:
- kf (Kernel) – Kernel object which has already been initialised with the desired kernel function.
- x_l (JAXArray) – Array containing the wavelength/vertical dimension regression variable(s). May be of shape (N_l,) or (d_l, N_l) for d_l different wavelength/vertical regression variables.
- x_t (JAXArray) – Array containing the time/horizontal dimension regression variable(s). May be of shape (N_t,) or (d_t, N_t) for d_t different time/horizontal regression variables.
- mf (Callable, optional) – The deterministic mean function; by default returns a JAXArray of zeros. Must have the signature mf(params, x_l, x_t) and return a JAXArray of shape (N_l, N_t), matching the shape of the observed data Y.
- logPrior (Callable, optional) – Log prior function; by default returns zero. Must have the signature logPrior(params) and return a scalar.
- jit (bool, optional) – Whether to jax.jit compile the likelihood, posterior and GP prediction functions. If set to False, then mean functions not written in JAX are supported and can still be used with PyMC (but not NumPyro, which requires JIT compilation). Defaults to True.
- Methods:
- Calculate a PyTree of stored values from the decomposition of the covariance matrix.
- Computes the log likelihood without returning any stored values from the decomposition of the covariance matrix.
- Computes the log likelihood without returning any stored values from the decomposition of the covariance matrix.
- Computes the log posterior without returning any stored values from the decomposition of the covariance matrix.
- Computes the log posterior without returning any stored values from the decomposition of the covariance matrix.
- Performs GP regression and replaces any outliers above a given number of standard deviations with the GP predictive mean evaluated at those locations.
- Visualises the fit to the data.
- Performs a quick (and approximate) 2D autocorrelation using …
- Computes the Laplace approximation at the location of …
- Computes the Laplace approximation at the location of …
- Performs GP regression and computes the GP predictive mean and the GP predictive uncertainty as the standard deviation at each location, or else can return the full covariance matrix.
- Computes the log likelihood and also returns any stored values from the decomposition of the covariance matrix.
- Computes the log likelihood and also returns any stored values from the decomposition of the covariance matrix.
- Computes the log posterior and also returns any stored values from the decomposition of the covariance matrix.
- Computes the log posterior and also returns any stored values from the decomposition of the covariance matrix.