% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/diagnostics.R
\name{performance_metrics}
\alias{performance_metrics}
\title{Compute performance metrics from cross-validation results.}
\usage{
performance_metrics(df, metrics = NULL, rolling_window = 0.1)
}
\arguments{
\item{df}{The dataframe returned by \code{cross_validation}.}

\item{metrics}{A vector of performance metrics to compute. If not provided,
will use c('mse', 'rmse', 'mae', 'mape', 'coverage').}

\item{rolling_window}{Proportion of data to use in each rolling window for
computing the metrics. Should be in [0, 1].}
}
\value{
A dataframe with a column for each metric, and a column 'horizon'.
}
\description{
Computes a suite of performance metrics on the output of cross-validation.
By default the following metrics are included:
\itemize{
\item 'mse': mean squared error
\item 'rmse': root mean squared error
\item 'mae': mean absolute error
\item 'mape': mean absolute percentage error
\item 'coverage': coverage of the upper and lower intervals
}
}
\details{
A subset of these can be specified by passing a vector of names as the
`metrics` argument.

Metrics are calculated over a rolling window of cross validation
predictions, after sorting by horizon. The size of that window (number of
simulated forecast points) is determined by the rolling_window argument,
which specifies a proportion of simulated forecast points to include in
each window. rolling_window=0 will compute the metric separately for each
simulated forecast point (i.e., 'mse' will actually be squared error with
no mean). The default of rolling_window=0.1 will use 10\% of the rows in
df in each window. rolling_window=1 will compute the metric across all
simulated forecast points. The results are set to the right edge of the
window; see the examples below.
}
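% The examples below are an illustrative sketch rather than verified package
% output: they assume a fitted prophet model `m` and a cross-validation
% dataframe produced by \code{cross_validation()} as described in its own
% documentation.
\examples{
\dontrun{
# Assumes `m` is a fitted prophet model; df.cv holds simulated forecasts.
df.cv <- cross_validation(m, horizon = 30, units = 'days')

# Default metric suite, computed over a rolling window of 10\% of the rows.
df.p <- performance_metrics(df.cv)
head(df.p)

# A subset of metrics with no rolling aggregation: one row per simulated
# forecast point, so 'mse' is squared error with no mean taken.
df.p2 <- performance_metrics(df.cv, metrics = c('mae', 'mse'),
                             rolling_window = 0)
}
}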