Defining Calibration Payoffs

To perform calibration you need to define a payoff that measures how well model results match available data (sometimes called goodness of fit). To do so, use the Payoff Specs, which you open by first clicking the Model Analysis Tools tab of the properties panel (with nothing selected in your model), then clicking Payoff in the tabs that appear at the top.


Payoffs are based on comparing the value of model variables to other values, typically data, available either in a run loaded externally (Load External Data dialog box) or as a time varying input (Import Data dialog box). It is also possible to compare against other model variables in the current run, though this is less common.

When you compare a model variable to data, whether in an externally loaded run or imported as time varying values, the comparison is made only where a data point is available; values between data points are not interpolated. Comparing against interpolated values (as you would get, for example, with a graphical function) is not appropriate for calibration. This is an important reason to bring the data in by loading or importing external data.
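The restriction to times where data exist can be sketched as follows. This is an illustrative example, not the tool's internal implementation: model values are simply paired with data values at matching times, and everything between data points is ignored.

```python
# Illustrative sketch: errors are accumulated only at times where a data
# point exists; model values at other times are ignored (no interpolation).
model = {0: 10.0, 1: 12.0, 2: 15.0, 3: 19.0, 4: 24.0}  # model variable by time
data = {0: 10.5, 2: 14.0, 4: 25.0}                      # sparse data series

errors = [model[t] - data[t] for t in data if t in model]
print(errors)  # differences at t = 0, 2, 4 only
```

Note that the model values at times 1 and 3 play no role in the comparison, even though the model produced them.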

When the variable value is compared to data, the comparison can be done using either the squared error or the absolute error.

Squared Errors

Taking the square of the error is the most common approach to error computation and is consistent with the underlying assumption of normally distributed errors. This is the assumption behind estimation methods such as ordinary least squares.

Absolute Errors

Taking a squared error means that big deviations from the data are penalized much more heavily than small ones. Though this is often desirable, when you are comparing multiple variables it means that the data that fit worst tend to dominate the parameter selection process. Using the absolute error lessens this dependency. Absolute errors are not based on a specific assumption about the underlying distribution of errors, but they are a fairly commonly used measure of fit, going under names such as the mean absolute deviation.
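The two error computations above can be contrasted in a small sketch. The `payoff` function and its `mode` argument are hypothetical names for illustration; the point is only how the two accumulation rules weight large versus small deviations.

```python
def payoff(model_vals, data_vals, mode="squared"):
    """Accumulate error between paired model and data values.

    mode="squared"  penalizes large deviations heavily (least-squares style);
    mode="absolute" penalizes all deviations linearly (mean-absolute-deviation style).
    """
    total = 0.0
    for m, d in zip(model_vals, data_vals):
        e = m - d
        total += e * e if mode == "squared" else abs(e)
    return total

m = [10.0, 15.0, 24.0]
d = [10.5, 14.0, 25.0]
print(payoff(m, d, "squared"))   # 0.25 + 1.0 + 1.0 = 2.25
print(payoff(m, d, "absolute"))  # 0.5  + 1.0 + 1.0 = 2.5
```

Doubling the largest error here would add 3.0 to the squared payoff but only 1.0 to the absolute payoff, which is why badly fitting series dominate under squared errors.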

Weights

When the behavior of several different variables contributes to the payoff, their values must be made commensurate. This is done using weights. The weights can be specified directly, or they can be computed automatically. When computed automatically, the weights are selected so that each variable contributes to the payoff approximately the number of data points used in its comparison. For squared errors this makes the payoff the same as twice the negative of the log likelihood (ignoring a constant term), making changes in it easier to interpret. For absolute errors it simply gives a basis for looking at the payoff value and its changes.

Note: It is generally easiest to let the software compute the weights. The only time this is not preferable is when you have information beyond the calibration data (based on the underlying statistics of the data generating process) about what the weights should be.