Continue the training of a DGP emulator
This function implements additional training iterations for a DGP emulator.
Arguments:

object: an instance of the dgp class.
N: the additional number of iterations for the DGP emulator training.
cores: the number of cores/workers used to optimize the GP components (in the same layer) at each M-step of the training. If set to NULL, the number of cores is set to (max physical cores available - 1). Only use multiple cores when there is a large number of GP components in different layers and optimization of the GP components is computationally expensive.
ess_burn: the number of burn-in steps for the ESS-within-Gibbs sampler at each I-step of the training.
verb: a bool indicating whether the progress bar is printed during the training:
FALSE: the training progress bar is not displayed.
TRUE: the training progress bar is displayed.
burnin: the number of training iterations to be discarded when calculating point estimates. Must be smaller than the total number of training iterations implemented so far. If not specified, only the last 25% of iterations are used. This overrides the value of burnin set in dgp(). Defaults to NULL.
B: the number of imputations used to produce the predictions. Increase the value to account for more imputation uncertainty. This overrides the value of B set in dgp(). Defaults to NULL.
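For illustration, a minimal sketch (not run) of continuing the training of a DGP emulator. The data, iteration counts, and argument values below are illustrative assumptions, not taken from this help page:

```r
# Sketch only, assuming the dgpsi package is installed and loaded.
library(dgpsi)

# Toy 1D training data (illustrative).
X <- matrix(seq(0, 1, length.out = 10), ncol = 1)
Y <- sin(2 * pi * X)

# Build and train a DGP emulator, then continue its training
# for additional iterations (iteration counts are assumptions).
m <- dgp(X, Y, N = 100)
m <- continue(m, N = 200)
```

The returned object is the emulator updated with the additional training iterations, so it can simply be reassigned over the original.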
See further examples and tutorials at https://mingdeyu.github.io/dgpsi-R/.