plot_cross_validation_metric.Rd

% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/plot.R
\name{plot_cross_validation_metric}
\alias{plot_cross_validation_metric}
\title{Plot a performance metric vs. forecast horizon from cross validation.
Cross validation produces a collection of out-of-sample model predictions
that can be compared to actual values, at a range of different horizons
(the distance from the cutoff). This computes a specified performance metric
for each prediction and aggregates the results over a rolling window of
horizon.}
\usage{
plot_cross_validation_metric(df_cv, metric, rolling_window = 0.1)
}
\arguments{
\item{df_cv}{The output from \code{cross_validation}.}
\item{metric}{Metric name, one of 'mse', 'rmse', 'mae', 'mape', or 'coverage'.}
\item{rolling_window}{Proportion of data to use in the rolling average of the
metric, in [0, 1]. Defaults to 0.1.}
}
\value{
A ggplot2 plot.
}
\description{
This uses \code{performance_metrics} to compute the metrics.
Valid values of metric are 'mse', 'rmse', 'mae', 'mape', and 'coverage'.
}
\details{
rolling_window is the proportion of data included in each rolling window of
the aggregation. The default value of 0.1 means 10\% of the data are included
in the aggregation for computing the metric.
As a concrete example, if metric = 'mse', then this plot will show the
squared error for each cross validation prediction, along with the MSE
averaged over rolling windows of 10\% of the data.
}
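% A minimal usage sketch (not run automatically). It assumes a data frame
% `df` with columns `ds` and `y` suitable for fitting a prophet model; the
% horizon and metric values are illustrative choices, not requirements.
\examples{
\dontrun{
library(prophet)
# Fit a model, then generate out-of-sample predictions with cross_validation
m <- prophet(df)
df_cv <- cross_validation(m, horizon = 30, units = 'days')
# Plot the MAPE of the predictions as a function of horizon
plot_cross_validation_metric(df_cv, metric = 'mape')
# A larger rolling_window averages over more data, giving a smoother curve
plot_cross_validation_metric(df_cv, metric = 'mape', rolling_window = 0.25)
}
}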