The Stochastic Volatility Model, Regime Switching and Value-at-Risk (VaR) in International Equity Markets


1. Introduction

Volatility is a key ingredient for derivative pricing, portfolio optimization and value-at-risk analysis. Hence, accurate estimates and good modeling of stock price volatility are of central interest in financial applications. The valuation of financial instruments is complicated by two characteristics of the volatility process. First, it is generally acknowledged that the volatility of many financial return series is not constant over time and exhibits prolonged periods of high and low volatility, often referred to as volatility clustering [1] [2]. Second, volatility is not directly observable^{1}. Two classes of models have been developed to capture this time-varying, autocorrelated volatility process: the GARCH model and the Stochastic Volatility (SV) model. GARCH models define the time-varying variance as a deterministic function of past squared innovations and lagged conditional variances, whereas the variance in the Stochastic Volatility model is modeled as an unobserved component that follows some stochastic process. Stochastic volatility models are also attractive because they are close to the models often used in financial theory to represent the behavior of financial prices. Furthermore, their statistical properties are easy to derive using well-known results on log-normal distributions. Finally, compared with the more popular GARCH models, they capture the main empirical properties often observed in daily series of financial returns (see, for example, Carnero et al. [23]). For surveys of the extensive GARCH literature we refer to Bollerslev et al. [5], Bera and Higgins [6] and Bollerslev et al. [7]; for stochastic volatility we refer to Taylor [8], Ghysels et al. [9], Shephard [10], and Broto and Ruiz [11]. Both models are defined by their first and second moments.
The Stochastic Volatility model introduced by Taylor [8] provides an alternative to the GARCH model in accounting for the time-varying and persistent volatility as well as for the leptokurtosis in financial return series. Stochastic volatility models present two main advantages over ARCH models. The first is their solid theoretical background, as they can be interpreted as discretized versions of the stochastic volatility continuous-time models put forward by modern finance theory (see Hull and White [12]). The second is their ability to generalize from univariate to multivariate series, as far as their estimation and interpretation are concerned. On the other hand, stochastic volatility models are more difficult to estimate than ARCH models, because it is not easy to derive their exact likelihood function. For this reason, a number of econometric methods have been proposed to solve the estimation problem for stochastic volatility models.

^{1}For a comprehensive review of volatility measures and their properties see Andersen, Bollerslev and Diebold [3] and for forecasting financial volatility see the survey by Poon and Granger [4] .

The stochastic volatility model defines volatility as a logarithmic first-order autoregressive process. It is an alternative to the GARCH models, which rely on simultaneous modeling of the first and second moments. For certain financial time series, such as stock index returns, which have been shown to display high positive first-order autocorrelations, this constitutes an improvement in terms of efficiency; see Campbell et al. [13]. The volatility of daily stock index returns has been estimated with stochastic volatility models, but results have usually relied on extensive pre-modeling of these series, thus avoiding the problem of simultaneous estimation of the mean and variance. Koopman and Hol Uspensky [14] proposed the Stochastic Volatility in Mean (SVM) model, which incorporates volatility as one of the determinants of the mean. This modification makes the model suitable for empirical applications involving the relationship between the mean and variance of returns. The SVM model can be viewed as the SV counterpart of the ARCH-M model of Engle et al. [2], with the main difference being that the ARCH-M model estimates the relationship between expected returns and expected volatility, whereas the SVM model simultaneously estimates the ex ante relation between returns and volatility and the volatility feedback effect.

Another way of modeling financial time series is to define different states of the world, or regimes, and to allow the dynamic behavior of financial variables to depend on the regime that occurs at any given point in time. This means that certain properties of the time series, such as its mean, variance and/or autocorrelation, differ across regimes. Regime switching models were first introduced by Goldfeld and Quandt [15] to provide a simple way to model endogenously determined structural breaks or regime shifts in parameters. Hamilton [16] generalizes this setting by allowing the mixing probability to be a time-varying function of the history of the data. To illustrate the importance of stochastic regime switching for financial time series, LeBaron [18] shows that the autocorrelations of stock returns are related to the level of volatility of these returns. In particular, autocorrelations tend to be larger during periods of low volatility and smaller during periods of high volatility. The periods of low and high volatility can be interpreted as distinct regimes; or, put differently, the level of volatility can be regarded as the regime-determining process. In this setup, the level of volatility is not known with certainty; what we can do is make a sensible forecast of this level, and hence of the regimes that will occur in the future, by assigning probabilities to the occurrence of the different regimes.

Markov switching models have been found to provide a flexible framework for handling many features of asset returns. In particular, they allow for nonlinearities arising from persistent jumps in the model parameters and have several appealing features. First, they provide a convenient framework to endogenously identify regime shifts that are commonplace in financial data. Regimes are treated as latent processes which are not observable but can be inferred via the estimation algorithm from observable data, such as the history of the asset’s returns. Second, as Markov switching models belong to the mixture-of-distributions class of stochastic processes, they are as versatile as mixture models in capturing salient features of financial data such as time-varying volatilities, skewness, and leptokurtosis. A detailed study of the statistical properties of Markov switching models by Timmerman [18] shows that Markov switching models can indeed approximate general classes of density functions with a wide range of conditional moments. Ang and Bekaert [19] show that Markov switching models with state-dependent means and variances can match exceedance correlations better than standard GARCH models or bivariate jump diffusion processes.

Related to the two models, returns on equity markets have also been found to be characterized by jumps, and these jumps tend to occur at the same time across countries, implying that conditional correlations between international equity returns tend to be higher in periods of high market volatility or following large downside moves. Evidence on jumps is provided by Jorion [20], Akgiray and Booth [21], Bates [22], Bekaert et al. [23], and Asgharian and Bengtsson [24]^{2}. For example, Asgharian and Bengtsson [24] studied jump spillover between equity indexes using a Bayesian approach and estimated the probabilities that jumps in large countries cause jumps or large returns in other countries. They found significant evidence of jump spillover, particularly between countries that belong to the same region and have similar industry structures^{3}.

^{2}For evidence on changing conditional correlations see, for instance Ang and Chen [25] , Longin and Solnik [26] , Karolyi and Stulz [27] , and Chakrabarti and Roll [28] .

^{3}Other studies used copula functions to study diversification benefits and dependence between American and developed markets, as done by Chollete et al. [29] and Buraschi et al. [30] .

^{4}Our results fall in line with those of Kuester et al. [31] . Yet their study covers only the NASDAQ, while ours covers more markets and a larger sample period.

In this paper, we extend the existing literature by modeling international equity markets with two volatility models: the log-normal SV model and the two-regime switching model. The log-normal SV model is estimated by quasi-maximum likelihood with the Kalman filter, while the two-regime switching model is estimated by maximum likelihood with the Hamilton filter. The results provide new evidence on the dynamics of risk and return in equity markets and on the possible existence of regimes in these markets. Based on the one-day-ahead forecasted conditional volatility from each model, we then calculate the one-day Value-at-Risk (VaR), and we backtest the results from each model using unconditional and conditional tests. We find that the value-at-risk estimates are higher for the SV model than those obtained under the regime switching model for all markets and over all horizons. The exception is the Japanese market, for which the stochastic volatility model generates lower VaR values than the regime switching model, a characteristic that reflects the performance of the Japanese market during the sample period, when Japan was hit by a real estate bubble and a banking crisis that made volatility in that market lower than that observed in other markets. Comparing the value-at-risk measures obtained directly from the two models with those obtained from the unconditional return distribution, the two models provide smaller value-at-risk measures. Finally, considering how Value-at-Risk behaves with the time horizon, value-at-risk measures increase more slowly with horizon under the regime switching model than under the stochastic volatility model^{4}.
The performance of both models is then backtested using conditional and unconditional tests, and we find that the Canadian equity market, represented by the S & P/TSX, performs the worst among all markets, while the DAX seems to be better modeled by the stochastic volatility model than by the regime switching model.

Our results deviate from those obtained by the above-mentioned literature in the following aspects: 1) the sample is longer than previously studied; 2) the previous literature focuses on either a single market or a few (e.g., Kuester et al. [31] ), while we cover seven markets; 3) we provide a forecasted one-day-ahead volatility based on each model and consequently use it to calculate value-at-risk measures; and finally 4) we find that the Canadian and Japanese markets appear to have different features than those obtained in previous results, whether in terms of the risk measures obtained by the two models or in terms of the suitability of each model when we backtest them.

The paper is organized as follows. Section 2 introduces the two models: the regime switching model and the stochastic volatility model. Section 3 describes the estimation methods. Section 4 presents the data and the estimation results from the two models. Section 5 provides the Value-at-Risk measures and backtesting results, and concludes.

2. Models of Volatility

The empirical regularities of asset returns (volatility clustering, prolonged serial correlation in squared returns, heavy tails, and persistence of volatility) suggest that the behavior of financial time series can be captured by a model which recognizes the time-varying nature of return volatility, as follows:

${y}_{t}={\mu}_{t}+{\sigma}_{t}{\epsilon}_{t}$ (1)

${\mu}_{t}=a+{\displaystyle \underset{i=1}{\overset{k}{\sum}}{b}_{i}{x}_{i,t}}$ (2)

where ${\mu}_{t}$ represents the mean, which depends on a constant a and regression coefficients ${b}_{1},\cdots ,{b}_{k}$ . The explanatory variables ${x}_{1,t},\cdots ,{x}_{k,t}$ may contain lagged exogenous and dependent variables. The disturbance term ${\epsilon}_{t}$ is IID with zero mean and unit variance and is usually assumed to follow a normal distribution, i.e., ${\epsilon}_{t}$ follows NID(0, 1).

Following Shephard [10] , models of changing volatility can be usefully partitioned into observation-driven and parameter-driven models, and both can be expressed in a parametric framework as ${y}_{t}|{z}_{t}$ following a $N\left({\mu}_{t},{\sigma}_{t}^{2}\right)$ distribution. In the first class, the autoregressive conditional heteroskedasticity (ARCH) models introduced by Engle [32] are the most representative example. In the second class, ${z}_{t}$ is a function of an unobserved or latent component. The log-normal stochastic volatility model created by Taylor [8] is the simplest and best known example:

${h}_{t}=\alpha +\beta {h}_{t-1}+{\eta}_{t}$ (3)

with ${y}_{t}|{z}_{t}$ following a $N\left(0,\mathrm{exp}\left({h}_{t}\right)\right)$ distribution and η_{t} being $NID\left(0,{\sigma}_{\eta}^{2}\right)$, where h_{t} represents the log-volatility, which is unobserved but can be estimated using the observations. One interpretation of the latent h_{t} is that it represents the random and uneven flow of new information into financial markets, which is difficult to model directly. The most popular model, from Taylor [8] , puts

${y}_{t}={\epsilon}_{t}\mathrm{exp}\left({h}_{t}/2\right)$ and ${h}_{t}=\alpha +\beta {h}_{t-1}+{\eta}_{t}$ (4)

where ε_{t} and η_{t} are two independent Gaussian white noises, with variances 1 and ${\sigma}_{\eta}^{2}$ , respectively. Due to the Gaussianity of η_{t}, this model is called a log-normal SV model. Although the assumption of Gaussianity of η_{t} may seem ad hoc at first sight, Andersen et al. [33] [34] show that the log-volatility process can be well approximated by a Normal distribution.
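As a concrete illustration, the log-normal SV process in Equation (4) can be simulated directly. This is a minimal sketch; the parameter values below are illustrative choices of ours, not estimates from the paper.

```python
import numpy as np

# Simulate y_t = eps_t * exp(h_t / 2), h_t = alpha + beta * h_{t-1} + eta_t
# (Equation (4)), with illustrative (not estimated) parameter values.
def simulate_sv(alpha=-0.1, beta=0.95, sigma_eta=0.2, n=3000, seed=0):
    rng = np.random.default_rng(seed)
    h = np.empty(n)
    h[0] = alpha / (1.0 - beta)          # start at the unconditional mean of h_t
    eta = rng.normal(0.0, sigma_eta, n)  # Gaussian volatility noise
    for t in range(1, n):
        h[t] = alpha + beta * h[t - 1] + eta[t]
    eps = rng.normal(0.0, 1.0, n)        # NID(0, 1) return innovation
    y = eps * np.exp(h / 2.0)            # returns with stochastic volatility
    return y, h

y, h = simulate_sv()
# With beta close to 1, the simulated series displays the volatility
# clustering described in the text: autocorrelated squared returns.
```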

Another possible interpretation of h_{t} is that it characterizes the regime in which financial markets are operating, in which case it can be described by a discrete-valued variable. The most popular approach to modelling changes in regime is the class of Markov switching models introduced by Hamilton [16] . In that case the model is ${y}_{t}={\epsilon}_{t}\mathrm{exp}\left({h}_{t}/2\right)$ with ${h}_{t}=\alpha +\beta {s}_{t}$, where s_{t} is a two-state first-order Markov chain which can take the values 0 and 1 and is independent of ε_{t}. The value of s_{t} depends only on the last value s_{t−1}; for i, j = 0, 1:

$P\left({s}_{t}=j|{s}_{t-1}=i,{s}_{t-2}=i,\cdots \right)=P\left({s}_{t}=j|{s}_{t-1}=i\right)={p}_{ij}$ (5)

The probabilities ${\left({p}_{ij}\right)}_{i,j=0,1}$ are called transition probabilities of moving from one state to the other. These transition probabilities are collected in the transition matrix P:

$\left[\begin{array}{cc}{p}_{00}& 1-{p}_{11}\\ 1-{p}_{00}& {p}_{11}\end{array}\right]$ (6)

which fully describes the Markov chain; note also that ${p}_{00}+{p}_{01}={p}_{10}+{p}_{11}=1$ . A two-state Markov chain can be represented by a simple AR(1) process as follows:

${s}_{t}=\left(1-{p}_{00}\right)+\left(-1+{p}_{00}+{p}_{11}\right){s}_{t-1}+{\upsilon}_{t}$ (7)

where ${\upsilon}_{t}={s}_{t}-E\left({s}_{t}|{s}_{t-1},{s}_{t-2},\cdots \right)$ and the volatility equation can be written the following way:

${h}_{t}=\alpha +\beta {s}_{t}=\alpha +\beta \left[\left(1-{p}_{00}\right)+\left(-1+{p}_{00}+{p}_{11}\right){s}_{t-1}+{\upsilon}_{t}\right]$ (8)

or

$\begin{array}{c}{h}_{t}=\alpha \left(2-{p}_{00}-{p}_{11}\right)+\beta \left(1-{p}_{00}\right)+\left(-1+{p}_{00}+{p}_{11}\right){h}_{t-1}+\beta {\upsilon}_{t}\\ =a+b{h}_{t-1}+{\varpi}_{t}\end{array}$ (9)

which implies the same structure of the stochastic volatility model but with a noise that can take only a finite set of values.
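The two-state chain of Equations (5)-(8) is easy to simulate and check numerically. A minimal sketch follows; the transition probabilities are hypothetical values of ours, not estimates from the paper.

```python
import numpy as np

# Simulate the two-state Markov chain s_t with transition probabilities
# p00 = P(s_t = 0 | s_{t-1} = 0) and p11 = P(s_t = 1 | s_{t-1} = 1).
p00, p11 = 0.97, 0.95       # hypothetical, highly persistent regimes
n = 50_000
rng = np.random.default_rng(1)

s = np.empty(n, dtype=int)
s[0] = 0
for t in range(1, n):
    stay = p00 if s[t - 1] == 0 else p11      # probability of staying put
    s[t] = s[t - 1] if rng.random() < stay else 1 - s[t - 1]

# AR(1) slope of the chain (Equation (7)) and its ergodic probability.
b = p00 + p11 - 1.0                           # = -1 + p00 + p11
pi1 = (1.0 - p00) / (2.0 - p00 - p11)         # long-run share of state 1
# The sample frequency s.mean() should be close to pi1 (0.375 here).
```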

3. Estimation Methods

A variety of estimation procedures have been proposed for stochastic volatility models, including, for example, the Generalized Method of Moments (GMM) used by Melino and Turnbull [35] , the Quasi Maximum Likelihood (QML) approach followed by Harvey et al. [36] and Ruiz [37] , the Efficient Method of Moments (EMM) applied by Gallant et al. [38] , and Markov Chain Monte Carlo (MCMC) procedures used by Jacquier et al. [39] and Kim et al. [40] . In this paper, the parameters of the SV model are estimated by the exact maximum likelihood method using Monte Carlo importance sampling techniques; we refer the reader to Koopman and Hol Uspensky [14] for details. The likelihood function for the SV model can be constructed using simulation methods developed by Shephard and Pitt [41] and Durbin and Koopman [42] . For the SV model we can express the likelihood function as:

$L\left(\psi \right)=p\left(y|\psi \right)={\displaystyle \int p\left(y,\theta |\psi \right)\text{d}\theta}={\displaystyle \int p\left(y|\theta ,\psi \right)p\left(\theta |\psi \right)\text{d}\theta}$ (10)

where $\psi ={\left(\phi ,{\sigma}_{\eta},{\sigma}_{\epsilon}\right)}^{\prime}$ and $\theta ={\left({h}_{1},\cdots ,{h}_{T}\right)}^{\prime}$ . An efficient way of evaluating such expressions is by using importance sampling (see Ripley [43] , Chapter 5). A simulation device is required to sample from an importance density $g\left(\theta |y,\psi \right)$, which should be as close as possible to the true density $p\left(\theta |y,\psi \right)$. A convenient choice for the importance density is the conditional Gaussian density, since in that case it is relatively straightforward to sample from it using simulation smoothers such as the ones developed by de Jong and Shephard [44] and Durbin and Koopman [42] . All models were estimated using programs written in the Ox language of Doornik [45] using SsfPack by Koopman, Shephard and Doornik [46] . The log-normal SV model is estimated by quasi-maximum likelihood with the Kalman filter, and the two-regime switching model by maximum likelihood with the Hamilton filter. The Ox programs were downloaded from http://personal.vu.nl/s.j.koopman/SJresearch.html.

The log-normal SV model is represented by Equation (4), with ε_{t} and η_{t} independent Gaussian white noises with variances 1 and ${\sigma}_{\eta}^{2}$ , respectively. The volatility equation is characterized by the constant parameter α, the autoregressive parameter β and the variance ${\sigma}_{\eta}^{2}$ of the volatility noise. The mean is either imposed equal to zero or estimated with the empirical mean of the series. Since the conditional volatility is specified as an autoregressive process of order one, the stationarity condition is |β| < 1. Moreover, the volatility noise standard deviation σ_η must be strictly positive. In the estimation procedure the following logistic and logarithm reparameterizations:

$\beta =2\left(\frac{\mathrm{exp}\left(b\right)}{1+\mathrm{exp}\left(b\right)}\right)-1$ and ${\sigma}_{\eta}=\mathrm{exp}\left({s}_{\eta}\right)$ (11)

have been considered in order to satisfy these conditions.
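The reparameterizations in Equation (11) can be sketched in code; the optimizer then works with the unconstrained pair (b, s_η) while |β| < 1 and σ_η > 0 hold automatically. The function names below are ours, for illustration only.

```python
import numpy as np

# Equation (11): logistic map for beta, log map for sigma_eta.
def to_constrained(b, s_eta):
    beta = 2.0 * np.exp(b) / (1.0 + np.exp(b)) - 1.0   # maps R -> (-1, 1)
    sigma_eta = np.exp(s_eta)                          # maps R -> (0, inf)
    return beta, sigma_eta

# Inverse maps, used to set unconstrained starting values.
def to_unconstrained(beta, sigma_eta):
    b = np.log((1.0 + beta) / (1.0 - beta))            # inverse logistic
    s_eta = np.log(sigma_eta)
    return b, s_eta

beta, sig = to_constrained(1.5, -2.0)
b, s = to_unconstrained(beta, sig)   # round-trips to (1.5, -2.0)
```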

The second model is a particular specification of the regime switching model introduced by Hamilton, in which the distribution of the returns is described by two regimes with the same mean but different variances, and by a constant transition matrix:

${y}_{t}=\{\begin{array}{c}\mu +{\sigma}_{0}{\epsilon}_{t}\text{\hspace{0.17em}}\text{if}\text{\hspace{0.17em}}{s}_{t}=0\\ \mu +{\sigma}_{1}{\epsilon}_{t}\text{\hspace{0.17em}}\text{if}\text{\hspace{0.17em}}{s}_{t}=1\end{array}$ (12)

and

$\left[\begin{array}{cc}{p}_{00}& 1-{p}_{11}\\ 1-{p}_{00}& {p}_{11}\end{array}\right]$

where s_{t} is a two-state Markov chain independent of ε_{t}, which is a Gaussian white noise with unit variance. The parameters of this model are the mean μ, the low and high standard deviations σ_{0}, σ_{1} and the transition probabilities p_{00}, p_{11} (also called persistence probabilities). As for the log-normal SV model, the logarithm and the logistic transformations ensure the positiveness of the volatilities and constrain the transition probabilities to values in the (0, 1) interval. Further, for the log-normal SV model the returns are transformed as ${y}_{t}^{*}=\mathrm{log}\left({\left({y}_{t}-{\stackrel{\xaf}{y}}_{t}\right)}^{2}\right)+1.27$, where ${\stackrel{\xaf}{y}}_{t}$ is the empirical mean. Thus, for the log-normal SV model the mean is not estimated but is simply set equal to the empirical mean. For the estimation, the starting values of the parameters are calculated from the time series analyzed. For example, the sample mean is used as an approximation of the mean of the switching regime model, and the empirical variance multiplied by appropriate factors is used for the high and low variances. For the log-normal SV model, by contrast, a range of possible values of the parameters is fixed and a value is randomly drawn.
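A minimal sketch of the Hamilton filter used to evaluate the likelihood of the switching model in Equation (12) is given below. The toy data and parameter values are our own and this is not the authors' Ox code; it only illustrates the recursion.

```python
import numpy as np

# Hamilton filter for Equation (12): same mean mu in both regimes,
# low/high standard deviations sigma0, sigma1, constant transition
# probabilities p00, p11.
def hamilton_filter_loglik(y, mu, sigma0, sigma1, p00, p11):
    P = np.array([[p00, 1.0 - p11],
                  [1.0 - p00, p11]])       # column j = transitions from state j
    pi1 = (1.0 - p00) / (2.0 - p00 - p11)  # ergodic probability of state 1
    xi = np.array([1.0 - pi1, pi1])        # filtered regime probabilities
    loglik = 0.0
    c = np.sqrt(2.0 * np.pi)
    for yt in y:
        dens = np.array([np.exp(-0.5 * ((yt - mu) / sigma0) ** 2) / (sigma0 * c),
                         np.exp(-0.5 * ((yt - mu) / sigma1) ** 2) / (sigma1 * c)])
        pred = P @ xi                      # predicted regime probabilities
        f = float(np.sum(dens * pred))     # density of y_t given the past
        loglik += np.log(f)
        xi = dens * pred / f               # Bayes update of the filter
    return loglik

rng = np.random.default_rng(2)
y = rng.normal(0.05, 1.0, 500)             # toy return series
ll = hamilton_filter_loglik(y, 0.05, 0.8, 1.6, 0.97, 0.95)
```

Maximizing this log-likelihood over (μ, σ₀, σ₁, p₀₀, p₁₁), after the transformations described above, yields the estimates reported in Table 2.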

4. Estimation Results

We examine the behavior of the following equity markets: the S & P500 for the USA, FTSE100 for the United Kingdom, CAC40 for France, S & P/TSX for Canada, Nikkei225 for Japan, DAX for Germany, and Swiss Market for Switzerland. We use a sample from 11/4/1996 to 12/10/2008, resulting in 3158 data points. The price data were obtained from Datastream. Each of the price indices was transformed via first differencing of the log price data to create a series which approximates the continuously compounded percentage return. The stock index prices are not adjusted for dividends, following studies by French et al. [47] and Poon and Taylor [48] who found that inclusion of dividends affected estimation results only marginally. Returns are calculated on a continuously compounded basis and expressed in percentages; they are therefore calculated as ${r}_{t}=100\ast \left(\mathrm{log}\left({P}_{t}/{P}_{t-1}\right)\right)$ , where P_{t} denotes the stock index level on day t.
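The return construction above can be sketched in a couple of lines; the prices below are made-up values, not Datastream data.

```python
import numpy as np

# Continuously compounded percentage returns: r_t = 100 * log(P_t / P_{t-1}).
prices = np.array([100.0, 101.5, 100.8, 102.3, 101.9])   # made-up index levels
returns = 100.0 * np.diff(np.log(prices))
# A price series of T observations yields T - 1 returns.
```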

The summary statistics are presented in Table 1. We observe that the Swiss Market shows the highest mean return, followed by the CAC40 and then the DAX. All the indices exhibit similar patterns of volatility as measured by the standard deviation, with the Nikkei225 having the highest variability and the S & P/TSX the lowest. We further observe that the returns are highly autocorrelated at lag 1, with the S & P/TSX showing the highest autocorrelation. The high first-order autocorrelation reflects the effects of non-synchronous or thin trading, whereas highly correlated squared returns can be seen as an indication of volatility clustering. The Q(12) and Q_{s}(12) test statistics, which jointly test the hypothesis that the first twelve autocorrelation coefficients of returns and squared returns are equal to zero, indicate that this hypothesis has to be rejected at the 1% significance level for all return and squared return series. A number of empirical studies have found similar results on the distributional characteristics of market returns. Kim and Kon [49] showed similar results for 30 stocks in the DJIA, S & P500, and CRSP indices. Campbell, Lo and MacKinlay [13] concluded that daily US stock indexes show negative skewness and positive excess kurtosis. The autocorrelation of squared returns is also consistent with the presence of time-varying volatility such as GARCH effects. As pointed out by Lamoureux and

Table 1. Summary statistics of daily returns.

The table contains summary statistics for the international equity markets. J.B. is the Jarque-Bera normality test statistic with 2 degrees of freedom; ρ_{k} is the sample autocorrelation coefficient at lag k with asymptotic standard error $1/\sqrt{T}$ , and Q(k) is the Box-Ljung portmanteau statistic based on the first k autocorrelations. ρ_{sk} are the sample autocorrelation coefficients at lag k for squared returns, and Q_{s}(12) is the Box-Ljung portmanteau statistic based on the first 12 squared-return autocorrelations. * indicates significance at 99%. ** indicates significance at 95%. *** indicates significance at 90%.

^{5}Before estimating the models, we test whether there are indeed regime shifts in the stock markets and whether a stochastic volatility model fits the data well. To do so, we apply Hansen’s [52] modified likelihood ratio test for regimes and Kobayashi and Shi [53] tests. Results are available upon request.

Lastrapes [50] and confirmed by Hamilton and Susmel [51] , regime shifts in the volatility process can also induce a spuriously high degree of volatility clustering^{5}.
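The Box-Ljung portmanteau statistic used in Table 1 can be sketched as follows; applied to squared returns it gives Q_s(k). The data below are simulated, not the paper's series.

```python
import numpy as np

# Box-Ljung Q(k): under the null that the first k autocorrelations are
# zero, Q(k) is asymptotically chi-squared with k degrees of freedom.
def ljung_box_q(x, k=12):
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    q = 0.0
    for lag in range(1, k + 1):
        rho = np.sum(xc[lag:] * xc[:-lag]) / denom   # sample autocorrelation
        q += rho ** 2 / (n - lag)
    return n * (n + 2) * q

rng = np.random.default_rng(3)
white = rng.normal(size=2000)
q_ret = ljung_box_q(white, 12)       # small for iid noise
q_sq = ljung_box_q(white ** 2, 12)   # Q_s(12): no ARCH effects in iid data
```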

The estimation results of the two models are reported in Table 2 and Table 3. Table 2 presents the results of estimating the regime switching model in the different markets. For this model, we can judge the persistence of the volatility from the values taken by the transition (or persistence) probabilities p_{00} and p_{11}; they are all above 0.90, confirming the high persistence of volatility in all markets. The parameter which governs the mean process is also reported in the first column of Table 2 with the corresponding standard errors. The mean parameter is positive and statistically significant for all series, except for the Nikkei225, where it is negative. The Japanese market is the exception since it went through major structural changes during the sample period, in terms of its risk and return characteristics. The estimation results of the log-normal SV model are reported in Table 3. The standard errors are calculated following Ruiz [37] for the log-normal SV model and as the inverse of the information matrix for the switching model. In both cases the z-statistics asymptotically follow an N(0, 1) distribution. All markets show strong persistence, since all the estimated autoregressive coefficients of the volatility equation (β) are higher than 0.90. The volatility estimates are also all highly significant and

Table 2. Results of the regime switching model applied to international equity markets.

The table reports the estimation results of the two-regime switching model introduced by Hamilton, applied to equity markets and estimated by maximum likelihood with the Hamilton filter. In this model the returns are distributed with the same mean, different variances and a constant transition matrix. The standard errors are calculated as the inverse of the information matrix and result in z-statistics asymptotically following an N(0, 1) distribution. μ is the mean value and LogL represents the loglikelihood. * indicates significance at 99%. ** indicates significance at 95%. *** indicates significance at 90%.

Table 3. Results of estimating the log-normal SV model applied to international equity markets.

The table reports the estimation results of the log-normal SV model, applied to equity markets and estimated by quasi-maximum likelihood with the Kalman filter. The volatility equation is characterized by the constant parameter α (constant), the autoregressive parameter β (AR part) and the variance ${\sigma}_{\eta}^{2}$ of the volatility noise (SD). The standard errors are calculated following Ruiz [37] and result in z-statistics asymptotically following an N(0, 1) distribution. * indicates significance at 99%. ** indicates significance at 95%. *** indicates significance at 90%.

quite similar for all markets. In practice, for many financial time series this coefficient is often found to be bigger than 0.90. This near-unity volatility persistence for high-frequency data is consistent with findings from both the SV and the GARCH literature. Among all the markets, the Swiss market, FTSE100, Nikkei225 and DAX show the highest variability in their volatility noise. For example, the standard deviation of the volatility noise in the FTSE100 is 0.1066, while that in the S & P500 is 0.071.

A graphical representation is provided for both models; to save space we only include the Japanese market. In the case of the log-normal SV model, the estimated volatility obtained directly from the Kalman smoother is not very informative, so a first-order Taylor expansion is used to compute the conditional mean of the estimated volatility. In the case of the switching model, we present the historical return series, the estimated volatility and the estimated switches between regimes. Figure 1 and Figure 2 present the Japanese market. The graphs show how the two models are able to capture some major market crises during the sample period, like the 1997 Asian financial market crisis, the collapse of LTCM in 1998, the tech bubble in 2000 and the September 11 attacks in 2001. All the other graphs are available from the author for inspection and capture those events.

The Japanese market is a special case where volatility forecasted from the regime switching model is the highest among all markets, an indication of some structural changes that took place during the sample period. Equity price volatility

Figure 1. Weighted volatility and regime shifts based on the regime switching model for Japan.

Figure 2. Estimated and simulated volatility based on the log-AR stochastic volatility model for Japan.

has trended up since the mid-1990s, and has been particularly high since 2000, as the technology bubble burst, followed by shocks such as the events of September 11, 2001, and the Enron and WorldCom accounting scandals. In the aftermath of the Louvre Accord, the Bank of Japan kept interest rates down to support the value of the dollar and to boost Japan’s domestic economy, stimulating demand for equities. Easy monetary conditions encouraged leveraged investment, aggressive equity financing, and excessive borrowing. Stock market movements were also amplified by portfolio insurance products and by arbitrage activities between stock and futures markets. Lending based on land and, to a lesser extent, equities as collateral amplified Japan’s financial bubble and the subsequent burst. Further, in February 1999, to abate deflationary pressures, the Bank of Japan adopted the zero interest rate policy. At the same time, a series of deregulations was introduced to improve the efficiency of the financial system, and the government promoted financial consolidation. Mark-to-market accounting was introduced and several agencies were established by the government to purchase nonperforming loans and shares held by banks. Consequently, the financial system became more volatile^{6}.

^{6}We thank a referee for pointing this out.

5. Value-at-Risk Results

Value-at-Risk (VaR) indicates the maximum potential loss at a given level of confidence (p) for a portfolio of financial assets over a specified time horizon (h). The VaR is a solution to the following problem:

$p={\displaystyle {\int}_{-\infty}^{VaR\left(h,p\right)}f\left({x}_{t+h}\right)\text{d}x}$ (13)

with x being the value of the portfolio. Different methods have been proposed to calculate the VaR. One of them is the parametric approach, which can be used to forecast the portfolio return distribution if this distribution is known in closed form; the VaR is then simply a quantile of this distribution. In the case of non-linearity we can use either Monte Carlo simulation or historical simulation approaches. The advantage of the parametric approach is that the factors can be updated using a general model of changing volatility. Having chosen the asset or portfolio distribution, it is possible to use the forecasted volatility to characterize the future return distribution. Thus, a conditional forecasted volatility measure ${\stackrel{^}{\sigma}}_{T+1/T}$ can be used to calculate the VaR over the next period. In our case, a different approach using both models, the stochastic volatility and regime switching models, is to devolatize the observed return series and to revolatilize it with an appropriate forecasted value obtained with a particular model of changing volatility. This approach is considered in several recent works (Barone-Adesi et al. [54] ; Hull and White [12] ; and Christoffersen [55] ). This method is also labeled the filtered historical simulation method and is used to investigate the nonparametric distribution-based VaR^{7}.
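For comparison, the parametric approach described above reduces to a one-line quantile calculation once a conditional volatility forecast is available. The forecast value below is an illustrative stand-in, assuming conditionally normal returns.

```python
import numpy as np

# Parametric one-day VaR from a forecasted conditional volatility,
# assuming conditionally normal returns. Values are illustrative.
sigma_forecast = 1.3        # one-day-ahead conditional volatility, in percent
mu = 0.0                    # conditional mean return
z = -2.3263478740408408     # 1% quantile of the standard normal

var_1day = -(mu + z * sigma_forecast)     # loss reported as a positive number
# A rough h-day scaling under iid normality is sqrt(h) times the 1-day VaR.
var_10day = np.sqrt(10.0) * var_1day
```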

^{7}The historical simulation method discards particular assumptions regarding the return series and calculates the VaR from the immediate past history of the returns series (Dowd, [56] ). The filtered historical simulation method is designed to improve on the shortcomings of historical simulation by augmenting the model-free estimates with parametric models. For example, Pritsker [57] asserts that the filtered historical simulation method compares favorably with historical simulation, since the historical simulation method cannot avoid the many shortcomings of purely model-free estimation approaches. When the historical return series includes insufficient extreme outcomes, the simulated value at risk may seriously underestimate the actual market risk.
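As a point of comparison with the simulation-based approach, the parametric (delta-normal) calculation used for Table 5 amounts to scaling a normal quantile by the forecasted volatility and the square root of the horizon. A minimal Python sketch; the function name and numerical inputs are illustrative, not taken from the paper:

```python
import math
from statistics import NormalDist

def delta_normal_var(sigma_forecast, horizon_days, p=0.01):
    """Delta-normal VaR: minus the p-quantile of a zero-mean normal return
    distribution, scaled to the holding period by the square-root-of-time
    rule. Returned as a positive loss figure in the same units as sigma."""
    z = NormalDist().inv_cdf(p)                 # e.g. about -2.326 for p = 1%
    return -z * sigma_forecast * math.sqrt(horizon_days)

# Illustrative numbers (not from the paper): 1.2% daily volatility, 1% level
var_5d = delta_normal_var(0.012, 5, p=0.01)     # 5-day holding period
```

Note that the square-root-of-time scaling explains why parametric VaRs grow mechanically with the horizon, a pattern also visible in the paper's Table 5 comparison.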

The idea is to consider a portfolio which perfectly replicates the composition of each stock market index. Given the estimated volatility of the stochastic volatility model, the Value-at-Risk of this portfolio can be obtained following the procedure proposed in Barone-Adesi et al. [54] . The historical portfolio returns are rescaled by the estimated volatility series to obtain the standardized residuals
${u}_{t}={y}_{t}/{\sigma}_{t}$ ,
$t=1,\cdots ,T$ . This historical simulation can be performed by bootstrapping the standardized returns to obtain the desired number of residuals
${u}_{j}^{*}$ ^{ }for
$j=1,\cdots ,M$ , where M can be arbitrarily large. To calculate the next period return, it is sufficient to multiply the simulated residuals by the forecasted volatility
${\stackrel{^}{\sigma}}_{T+1/T}:{y}_{j}^{*}={u}_{j}^{*}{\stackrel{^}{\sigma}}_{T+1/T}$ and then the VaR for the next day, at the desired confidence level p, is calculated as the (pM)th element of these returns sorted in ascending order.

To make the historical simulation consistent with empirical findings, we use two models, the log-normal SV model and the regime switching model, to describe the volatility behavior. Then, past returns are standardized by the estimated volatility to obtain the standardized residuals. Our statistical tests confirm that these standardized residuals behave approximately as an i.i.d. series that exhibits heavy tails. We then use historical simulation to calculate the Value-at-Risk measures. Finally, to adjust them to the current market conditions, the randomly selected standardized residuals are multiplied by the forecasted volatility obtained from the stochastic volatility and regime switching models.
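The devolatize/bootstrap/revolatilize procedure described above can be sketched in Python as follows. All names and the toy data are illustrative; in the paper, the volatility series and forecast come from the fitted SV and regime switching models:

```python
import numpy as np

rng = np.random.default_rng(42)

def fhs_var(returns, sigma_est, sigma_forecast, p=0.01, n_sims=100_000):
    """Filtered historical simulation VaR, a sketch of the Barone-Adesi et al.
    style procedure: devolatize, bootstrap, revolatilize, take the p-quantile."""
    u = np.asarray(returns) / np.asarray(sigma_est)    # u_t = y_t / sigma_t
    u_star = rng.choice(u, size=n_sims, replace=True)  # bootstrapped residuals u*_j
    y_star = u_star * sigma_forecast                   # y*_j = u*_j * sigma_{T+1|T}
    return -np.quantile(y_star, p)                     # VaR reported as a positive loss

# Toy inputs (illustrative only): roughly N(0, 1%) returns, constant fitted vol
y = rng.standard_normal(2000) * 0.01
sigma = np.full(2000, 0.01)
var_next_day = fhs_var(y, sigma, sigma_forecast=0.015, p=0.01)
```

Because the bootstrap draws from the empirical standardized residuals, the heavy tails noted in the text are preserved in the simulated return distribution rather than being assumed away.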

The VaR measures from the two models are presented together with the results obtained from the unconditional returns in Table 4 and Table 5. An examination of the results reveals that the VaR estimates, in general, are higher for the stochastic volatility model than those for the regime-switching model for almost all markets and over all horizons. The exception is that of Japan represented by the Nikkei225 index, where in both cases, whether using historical

Table 4. VaR measures obtained by using historical simulation method.

The table reports the value-at-risk (VaR) estimates based on the conditional and unconditional distributions of the returns, calculated by the historical simulation method. The VaRs are calculated for 5-, 10- and 15-day holding periods at the 1% significance level. Unconditional distribution measures are based on historical returns, while conditional distribution measures are those obtained by weighting the standardized residuals by the forecasted volatility. Values reported are in percentage terms.

Table 5. VaR obtained by delta-normal approximation.

The table reports the VaR estimates based on historical data. The significance level is 1% and the VaRs are calculated for 5-, 10- and 15-day time horizons. Unconditional distribution measures are based on historical returns, while conditional distribution measures are those obtained by weighting the standardized residuals by the forecasted volatility. Values reported are in percentage terms.

simulation or delta-normal approximation, the stochastic volatility model generates lower VaR values than those obtained from the regime switching model. Comparing the VaRs calculated directly from the two models with those obtained from the unconditional distribution of returns, we find that the two models generate smaller VaRs. When we consider the time horizon and its impact on the calculation of the Value-at-Risk measures, we find that VaRs increase with the time horizon; generally, under the regime switching model, VaRs increase more slowly with the horizon than under the SV approach.

6. Backtesting the VaR Results

The Value-at-Risk $Va{R}_{t+1}^{p}$ measure promises that the actual return will only be worse than the $Va{R}_{t+1}^{p}$ forecast p·100 percent of the time. Given a time series of past ex-ante VaR forecasts and past ex-post returns, we can define the “hit sequence” of VaR violations as:

${I}_{t+1}=\{\begin{array}{l}1,\text{\hspace{0.17em}}\text{if}\text{\hspace{0.17em}}{R}_{pf,t+1}<-Va{R}_{t+1}^{p}\hfill \\ 0,\text{\hspace{0.17em}}\text{if}\text{\hspace{0.17em}}{R}_{pf,t+1}\ge -Va{R}_{t+1}^{p}\hfill \end{array}$ (14)

^{8}For other methods and elements in backtesting VaR models, see Christoffersen and Diebold [59] , Christoffersen and Pelletier [60] , McNeil and Frey [61] , Diebold, Gunther, and Tsay [62] , and Diebold, Hahn, and Tsay [63] .

The hit sequence returns a 1 on day t + 1 if the loss on that day was larger than the VaR number predicted in advance for that day. If the VaR was not violated, then the hit sequence returns a 0. When backtesting our models, we construct a sequence
${\left\{{I}_{t+1}\right\}}_{t=1}^{T}$ across T days indicating when the past violations occurred. We implement the following three test statistics derived from Christoffersen [58] : the unconditional, independence, and conditional coverage tests^{8}. Christoffersen’s [58] idea is to separate out the particular predictions being tested, and then test each prediction separately. The first of these is that the model generates the “correct” frequency of exceedances, which in this context is described as the prediction of correct unconditional coverage. The other prediction is that exceedances are independent of each other. This latter prediction is important insofar as it suggests that exceedances should not be clustered over time. To explain the Christoffersen [58] approach, we briefly describe the three tests.
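Equation (14) translates directly into code. A small sketch (the sign convention here stores the VaR as a positive loss figure, so a violation occurs when the return falls below minus the forecast; the data are illustrative):

```python
import numpy as np

def hit_sequence(returns, var_forecasts):
    """Violation indicator of Equation (14): I_{t+1} = 1 when the realized
    return falls below minus the VaR forecast, and 0 otherwise."""
    return (np.asarray(returns) < -np.asarray(var_forecasts)).astype(int)

# Illustrative data: violations occur on days 1 and 4
hits = hit_sequence([-0.03, 0.01, -0.01, -0.05, 0.00],
                    [0.02, 0.02, 0.02, 0.02, 0.02])
```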

6.1. Unconditional Coverage Testing

According to this test, we are interested in testing if the fraction of violations obtained from our models, call it π, is significantly different from the promised fraction, p. We call this the unconditional coverage hypothesis. To test this, we write the likelihood of an i.i.d. Bernoulli (π) hit sequence as:

$L\left(\pi \right)={\displaystyle \underset{t=1}{\overset{T}{\prod}}{\left(1-\pi \right)}^{1-{I}_{t+1}}{\pi}^{{I}_{t+1}}}={\left(1-\pi \right)}^{{T}_{0}}{\pi}^{{T}_{1}}$ (15)

where T_{0} and T_{1} are the number of 0s and 1s in the sample. π can be estimated as
$\stackrel{^}{\pi}={T}_{1}/T$ , that is, the observed fraction of violations in the sequence. Plugging the estimate back into the likelihood function gives the optimized likelihood as:

$L\left(\stackrel{^}{\pi}\right)={\left(1-{T}_{1}/T\right)}^{{T}_{0}}{\left({T}_{1}/T\right)}^{{T}_{1}}$ .

Under the unconditional coverage null hypothesis that π = p, where p is the known VaR coverage rate, we have the likelihood:

$L\left(p\right)={\displaystyle \underset{t=1}{\overset{T}{\prod}}{\left(1-p\right)}^{1-{I}_{t+1}}{p}^{{I}_{t+1}}}={\left(1-p\right)}^{{T}_{0}}{p}^{{T}_{1}}$

The unconditional coverage hypothesis using a likelihood ratio test can be checked as:

$L{R}_{uc}=-2\mathrm{ln}\left[L\left(p\right)/L\left(\stackrel{^}{\pi}\right)\right]$ (16)

Asymptotically, as T goes to infinity, this test will be distributed as a χ^{2} with one degree of freedom. Substituting in the likelihood functions, we write:

$L{R}_{uc}=-2\mathrm{ln}\left[{\left(1-p\right)}^{{T}_{0}}{p}^{{T}_{1}}/\left\{{\left(1-{T}_{1}/T\right)}^{{T}_{0}}{\left({T}_{1}/T\right)}^{{T}_{1}}\right\}\right]$ (17)

which follows a χ^{2} distribution with one degree of freedom. The VaR model is rejected or accepted either by using a specific critical value or by calculating the P-value associated with the test statistic.
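A sketch of the LR_{uc} computation in Equations (15)-(17); the example counts are illustrative, not the paper's results:

```python
import math

def lr_uc(hits, p):
    """Unconditional coverage statistic of Equations (15)-(17):
    LR_uc = -2 ln[L(p) / L(pi_hat)], asymptotically chi-square(1)."""
    T = len(hits)
    T1 = sum(hits)                       # number of violations (1s)
    T0 = T - T1                          # number of non-violations (0s)
    pi_hat = T1 / T                      # observed violation frequency
    ll_null = T0 * math.log(1 - p) + T1 * math.log(p)
    ll_mle = (T0 * math.log(1 - pi_hat) if T0 else 0.0) + \
             (T1 * math.log(pi_hat) if T1 else 0.0)
    return -2.0 * (ll_null - ll_mle)

# Example: 6 violations in 250 trading days against a 1% VaR
stat = lr_uc([1] * 6 + [0] * 244, p=0.01)
reject_5pct = stat > 3.841               # chi-square(1) critical value at 5%
```

In this example the observed frequency (2.4%) exceeds the promised 1%, but the statistic (about 3.56) falls just short of the 5% critical value, so the model is not rejected.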

6.2. Independence Testing

Under this test, the hit sequence is assumed to be dependent over time, such that it can be described as a so-called first-order Markov sequence with transition probability matrix:

${\Pi}_{1}=\left[\begin{array}{cc}1-{\pi}_{01}& {\pi}_{01}\\ 1-{\pi}_{11}& {\pi}_{11}\end{array}\right]$ .

These transition probabilities simply mean that conditional on today being a nonviolation (that is, I_{t} = 0), the probability of tomorrow being a violation (that is, I_{t}_{+1} = 1) is π_{01}. The probability of tomorrow being a violation given that today is also a violation is π_{11} = Pr(I_{t}_{+1} = 1 | I_{t} = 1). Accordingly, the two probabilities π_{01} and π_{11} describe the entire process. The probability of a nonviolation following a nonviolation is 1 − π_{01}, and the probability of a nonviolation following a violation is 1 − π_{11}. If we observe a sample of T observations, then the likelihood function of the first-order Markov process can be written as:

$L\left({\Pi}_{1}\right)={\left(1-{\pi}_{01}\right)}^{{T}_{00}}{\pi}_{01}^{{T}_{01}}{\left(1-{\pi}_{11}\right)}^{{T}_{10}}{\pi}_{11}^{{T}_{11}}$

where T_{ij}, i, j = 0, 1 is the number of observations with a j following an i. Taking first derivatives with respect to π_{01} and π_{11} and setting these derivatives to zero, we can solve for the maximum likelihood estimates:

${\stackrel{^}{\pi}}_{01}={T}_{01}/\left({T}_{00}+{T}_{01}\right)$ and ${\stackrel{^}{\pi}}_{11}={T}_{11}/\left({T}_{10}+{T}_{11}\right)$ .

Using the fact that the probabilities have to sum to one, we have: π_{00} = 1 − π_{01} and π_{10} = 1 − π_{11}, which can be used to determine the matrix of the estimated transition probabilities.

In the case of the hits being independent over time, then the probability of a violation tomorrow does not depend on today being a violation or not, and we can write π_{01} = π_{11} = π. In this case, we can test the independence hypothesis that π_{01} = π_{11} using a likelihood ratio test:

$L{R}_{ind}=-2\mathrm{ln}\left[L\left(\stackrel{^}{\pi}\right)/L\left({\stackrel{^}{\Pi}}_{1}\right)\right]$ (18)

following a
${\chi}_{1}^{2}$ distribution, where $L\left(\stackrel{^}{\pi}\right)$ is the likelihood from the LR_{uc} test.

Although the LR_{uc} test can reject a model that either overestimates or underestimates the true but unobservable VaR, it cannot examine whether the exceptions are randomly distributed. In a risk management framework, it is important that VaR exceptions be uncorrelated over time, which prompts independence and conditional coverage tests based on the evaluation of interval forecasts. Christoffersen [58] developed independence and conditional coverage tests that jointly investigate whether the total number of failures is equal to the expected one and whether the VaR exceptions are independently distributed. In particular, the advantage of Christoffersen’s procedure is that it can reject a model that generates either too many or too few clustered exceptions. Since accurate VaR estimates exhibit the property of correct conditional coverage, the hit sequence series must exhibit both correct unconditional coverage and serial independence.
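The independence statistic of Equation (18) can be computed from the transition counts T_{ij}. A sketch, with an illustrative clustered hit sequence (not from the paper's data):

```python
import math

def _bern_ll(n0, n1, q):
    """Bernoulli log-likelihood for n0 zeros and n1 ones at probability q."""
    out = 0.0
    if n0:
        out += n0 * math.log(1 - q)
    if n1:
        out += n1 * math.log(q)
    return out

def lr_ind(hits):
    """Independence statistic of Equation (18), built from the first-order
    Markov transition counts T_ij (a j following an i); chi-square(1)."""
    T = [[0, 0], [0, 0]]
    for prev, cur in zip(hits[:-1], hits[1:]):
        T[prev][cur] += 1
    T00, T01, T10, T11 = T[0][0], T[0][1], T[1][0], T[1][1]
    pi01 = T01 / (T00 + T01)
    pi11 = T11 / (T10 + T11) if (T10 + T11) else 0.0
    pi = (T01 + T11) / (len(hits) - 1)       # pooled rate over the transitions
    ll_markov = _bern_ll(T00, T01, pi01) + _bern_ll(T10, T11, pi11)
    ll_iid = _bern_ll(T00 + T10, T01 + T11, pi)
    return -2.0 * (ll_iid - ll_markov)

# Mildly clustered violations (toy data): the Markov model fits slightly better
stat = lr_ind([0, 0, 1, 1, 0, 0, 0, 1, 1, 0])
```

A strongly clustered sequence such as `[0]*5 + [1]*5` yields a much larger statistic, which is exactly the clustering this test is designed to detect.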

6.3. Conditional Coverage Testing

Ultimately, we care about simultaneously testing if the VaR violations are independent and the average number of violations is correct. We can test jointly for independence and correct coverage using the conditional coverage test:

$L{R}_{cc}=-2\mathrm{ln}\left[L\left(p\right)/L\left({\stackrel{^}{\Pi}}_{1}\right)\right]$ (19)

again following a
${\chi}_{2}^{2}$ distribution and corresponding to testing that π_{01} = π_{11} = p. It can be proved that LR_{cc} = LR_{uc} + LR_{ind}. The Christoffersen approach enables us to test both the coverage and independence hypotheses at the same time. Moreover, if the model fails a test of both hypotheses combined, his approach enables us to test each hypothesis separately, and so establish where the model failure arises.
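A sketch of the conditional coverage statistic of Equation (19). In this sketch, both likelihoods are evaluated over the T − 1 transitions so that the decomposition into unconditional coverage and independence components holds exactly; the inputs are toy data, not the paper's results:

```python
import math

def _bern_ll(n0, n1, q):
    """Bernoulli log-likelihood for n0 zeros and n1 ones at probability q."""
    out = 0.0
    if n0:
        out += n0 * math.log(1 - q)
    if n1:
        out += n1 * math.log(q)
    return out

def lr_cc(hits, p):
    """Conditional coverage statistic of Equation (19):
    -2 ln[L(p) / L(Pi_hat_1)], asymptotically chi-square(2)."""
    counts = [[0, 0], [0, 0]]
    for prev, cur in zip(hits[:-1], hits[1:]):
        counts[prev][cur] += 1
    T00, T01, T10, T11 = counts[0][0], counts[0][1], counts[1][0], counts[1][1]
    pi01 = T01 / (T00 + T01)
    pi11 = T11 / (T10 + T11) if (T10 + T11) else 0.0
    ll_null = _bern_ll(T00 + T10, T01 + T11, p)                     # L(p)
    ll_markov = _bern_ll(T00, T01, pi01) + _bern_ll(T10, T11, pi11)  # L(Pi_hat_1)
    return -2.0 * (ll_null - ll_markov)

# Toy clustered hit sequence against a promised 10% coverage rate
stat = lr_cc([0, 0, 1, 1, 0, 0, 0, 1, 1, 0], p=0.10)
reject_5pct = stat > 5.991        # chi-square(2) critical value at 5%
```

Here the joint test rejects: the toy sequence violates far more often than 10% of the time and does so in clusters, so both components contribute to the statistic.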

The results for the unconditional and conditional coverage tests are reported in Table 6 and Table 7. Table 6 reports the results based on the stochastic volatility model, and Table 7 reports those based on the regime switching model. The symbol * indicates that the test rejected the null hypothesis. We use two significance levels, 5% and 1%. If LR_{uc} is statistically insignificant, it implies that the expected and the actual number of observations falling below the VaR estimates are statistically the same. Conversely, rejection of the null hypothesis indicates that the computed VaR estimates are not sufficiently accurate. According to the LR_{uc} test statistics, and at the 5% significance level, VaR models based on both the stochastic volatility and regime switching models perform similarly for all markets, except for the FTSE100, where the LR_{uc} rejects the null hypothesis. Likewise, according to the LR_{ind} and LR_{cc} tests, the VaR models based on the two volatility models again perform in a similar fashion. The performance of both models at the 5% significance level is the worst for the S & P/TSX;

Table 6. Unconditional, conditional and independence coverage tests based on log-normal stochastic volatility model.

The table reports the unconditional, conditional and independence coverage tests based on the Log-Normal Stochastic Volatility model. * indicates rejection of the VaR model.

Table 7. Unconditional, conditional and independence coverage tests based on regime switching model.

The table reports the unconditional, conditional and independence coverage tests based on the regime switching model. * indicates rejection of the VaR model.

this is because both tests are rejected and both models fail to provide an accurate prediction of the downside risk at the 5% significance level. Further, the backtesting results indicate that the regime switching model performs poorly for the DAX series according to the LR_{ind} test.

7. Conclusion

This paper proposes two models, namely the log stochastic volatility model and the regime switching model, for calculating value at risk. The two models were applied to international equity markets and used to forecast future daily volatility. Then, based on the forecasted daily volatility, we calculated the Value at Risk in each market. It was observed that the two models generate smaller VaRs than the unconditional distributional method. Further, based on each model, it was found that the Japanese market displays lower values of Value at Risk under the stochastic volatility model than under the regime switching model. Considering how the VaRs increase with the time horizon, generally, under the regime switching model, VaRs increase more slowly with the horizon than under the stochastic volatility model. Finally, we backtest each model and find that the performance of both models is worst for the S & P/TSX, while the regime switching model does not perform well for the DAX series in some cases. The results have significant implications for risk management, trading and hedging activities, as well as for the pricing of equity derivatives.

References

[1] Black, F. (1976) Studies of Stock Price Volatility Changes. Proceedings of the 1976 Meetings of the American Statistical Association, Business and Economical Statistics Section, 177-181.

[2] Engle, R., Lilien, D. and Robins, R. (1987) Estimating Time-Varying Risk Premia in the Term Structure. The ARCH-M Model. Econometrica, 55, 391-407.

https://doi.org/10.2307/1913242

[3] Andersen, T.G., Bollerslev, T. and Diebold, F.X. (2005) Parametric and Nonparametric Volatility Measurement. In: Hansen, L.P. and Ait-Sahalia, Y., Eds., Handbook of Financial Econometrics, North Holland, Amsterdam, 67-137.

[4] Poon, S.H. and Granger, C.W.J. (2003) Forecasting Volatility in Financial Markets: A Review. Journal of Economic Literature, 41, 478-539.

https://doi.org/10.1257/.41.2.478

[5] Bollerslev, T., Chou, R.Y. and Kroner, K. (1992) ARCH Modelling in Finance: A Review of the Theory and Empirical Evidence. Journal of Econometrics, 52, 5-59.

[6] Bera, A.K. and Higgins, M.L. (1993) ARCH Models: Properties, Estimation and Testing. Journal of Economic Surveys, 7, 305-366.

https://doi.org/10.1111/j.1467-6419.1993.tb00170.x

[7] Bollerslev, T., Engle, R.F. and Nelson, D.B. (1994) ARCH Models. In: Engle, R.F. and McFadden, D.L., Eds., Handbook of Econometrics, Vol. 4, Elsevier Science, Amsterdam, 2959-3038.

[8] Taylor, S.J. (1986) Modelling Financial Time Series. Wiley, Chichester.

[9] Ghysels, E., Harvey, A.C. and Renault, E. (1996) Stochastic Volatility. In: Maddala, G.S. and Rao, C.R., Eds., Handbook of Statistics, Vol. 14, Statistical Methods in Finance, North-Holland, Amsterdam, 128-198.

[10] Shephard, N. (1996) Statistical Aspects of ARCH and Stochastic Volatility. In: Cox, D.R., Hinkley, D.V. and Barndorff-Nielsen, O.E., Eds., Time Series Models in Econometrics, Finance and Other Fields, Monographs on Statistics and Applied Probability, Vol. 65, Chapman and Hall, 1-67.

https://doi.org/10.1007/978-1-4899-2879-5_1

[11] Broto, C. and Ruiz, E. (2004) Estimation Methods for Stochastic Volatility Models: A Survey. Journal of Economic Surveys, 18, 613-649.

https://doi.org/10.1111/j.1467-6419.2004.00232.x

[12] Hull, J. and White, A. (1987) The Pricing of Options on Assets with Stochastic Volatilities. The Journal of Finance, 42, 281-300.

https://doi.org/10.1111/j.1540-6261.1987.tb02568.x

[13] Campbell, J.Y., Lo, A.W. and Mackinlay, A.C. (1997) The Econometrics of Financial Markets. Princeton University Press, Princeton.

[14] Koopman, S. and Hol Uspensky, E. (2002) The Stochastic Volatility in Mean Model: Empirical Evidence from International Stock Markets. Journal of Applied Econometrics, 17, 667-689.

https://doi.org/10.1002/jae.652

[15] Goldfeld, S.M. and Quandt, R.E. (1973) A Markov Model for Switching Regressions. Journal of Econometrics, 1, 3-16.

[16] Hamilton, J.D. (1989) A New Approach to the Economic Analysis of Nonstationary Time Series and the Business Cycle. Econometrica, 57, 357-384.

https://doi.org/10.2307/1912559

[17] LeBaron, B. (1992) Some Relationships between Volatility and Serial Correlations in Stock Market Returns. Journal of Business, 65, 199-219.

https://doi.org/10.1086/296565

[18] Timmermann, A. (2000) Moments of Markov Switching Models. Journal of Econometrics, 96, 75-111.

[19] Ang, A. and Bekaert, G. (2002) International Asset Allocation with Regime Shifts. Review of Financial Studies, 15, 1137-1187.

https://doi.org/10.1093/rfs/15.4.1137

[20] Jorion, P. (1988) On Jump Processes in the Foreign Exchange and Stock Markets. Review of Financial Studies, 1, 427-445.

https://doi.org/10.1093/rfs/1.4.427

[21] Akgiray, V. and Booth, G. (1988) Mixed Diffusion-Jump Process Modelling of Exchange Rate Movements. Review of Economics and Statistics, 70, 631-637.

https://doi.org/10.2307/1935826

[22] Bates, D. (1996) Jumps and Stochastic Volatility: Exchange Rate Processes Implicit in Deutsche Mark Options. Review of Financial Studies, 9, 69-107.

https://doi.org/10.1093/rfs/9.1.69

[23] Bekaert, G., Erb, C., Harvey, C. and Viskanta, T. (1998) Distributional Characteristics of Emerging Market Returns and Asset Allocation. Journal of Portfolio Management, 24, 102-116.

https://doi.org/10.3905/jpm.24.2.102

[24] Asgharian, H. and Bengtsson, C. (2006) Jump Spillover in International Equity Markets. Journal of Financial Econometrics, 2, 167-203.

https://doi.org/10.1093/jjfinec/nbj005

[25] Ang, A. and Chen, J. (2002) Asymmetric Correlations of Equity Portfolios. Journal of Financial Economics, 63, 443-494.

[26] Longin, F. and Solnik, B. (1995) Is the Correlation in International Equity Returns Constant? Journal of International Money and Finance, 14, 3-26.

[27] Karolyi, A. and Stulz, R. (1996) Why Do Markets Move Together? An Investigation of U.S.-Japan Stock Return Movements. Journal of Finance, 51, 951-986.

https://doi.org/10.1111/j.1540-6261.1996.tb02713.x

[28] Chakrabarti, R. and Roll, R. (2000) East Asian and Europe during the 1997 Asian Collapse: A Clinical Study of a Financial Crisis. Working Paper, UCLA.

[29] Chollete, L., de la Pena, V. and Lu, C.-C. (2011) International Diversification: A Copula Approach. Journal of Banking and Finance, 35, 403-417.

[30] Buraschi, A., Porchia, P. and Trojani, F. (2010) Correlation Risk and Optimal Portfolio Choice. Journal of Finance, 65, 393-420.

https://doi.org/10.1111/j.1540-6261.2009.01533.x

[31] Kuester, K., Mittnik, S. and Paolella, M. (2006) Value-at-Risk Prediction: A Comparison of Alternative Strategies. Journal of Financial Econometrics, 1, 53-89.

[32] Engle, R. (1982) Autoregressive Conditional Heteroskedasticity with Estimates of the Variance of U.K. Inflation. Econometrica, 50, 987-1008.

https://doi.org/10.2307/1912773

[33] Andersen, T.G., Bollerslev, T., Diebold, F.X. and Labys, P. (2001) The Distribution of Realized Exchange Rate Volatility. Journal of the American Statistical Association, 96, 42-55.

https://doi.org/10.1198/016214501750332965

[34] Andersen, T.G., Bollerslev, T., Diebold, F.X. and Labys, P. (2003) Modeling and Forecasting Realized Volatility. Econometrica, 71, 579-626.

https://doi.org/10.1111/1468-0262.00418

[35] Melino, A. and Turnbull, S.M. (1990) Pricing Foreign Currency Options with Stochastic Volatility. Journal of Econometrics, 45, 239-265.

[36] Harvey, A.C., Ruiz, E. and Shephard, N. (1994) Multivariate Stochastic Variance Models. Review of Economic Studies, 61, 247-264.

https://doi.org/10.2307/2297980

[37] Ruiz, E. (1994) Quasi-Maximum Likelihood Estimation of Stochastic Volatility Models. Journal of Econometrics, 63, 289-306.

[38] Gallant, A.R., Hsieh, D.A. and Tauchen, G.E. (1997) Estimation of Stochastic Volatility Models with Diagnostics. Journal of Econometrics, 81, 159-192.

[39] Jacquier, E., Polson, N.G. and Rossi, P.E. (1994) Bayesian Analysis of Stochastic Volatility Models (with Discussion). Journal of Business and Economic Statistics, 12, 371-389.

[40] Kim, S., Shephard, N. and Chib, S. (1998) Stochastic Volatility: Likelihood Inference and Comparison with ARCH Models. Review of Economic Studies, 65, 361-393.

https://doi.org/10.1111/1467-937X.00050

[41] Shephard, N. and Pitt, M. (1997) Likelihood Analysis of Non-Gaussian Measurement Time Series. Biometrika, 84, 653-667.

https://doi.org/10.1093/biomet/84.3.653

[42] Durbin, J. and Koopman, S. (1997) Monte Carlo Maximum Likelihood Estimation for Non-Gaussian State Space Models. Biometrika, 84, 669-684.

https://doi.org/10.1093/biomet/84.3.669

[43] Ripley, B. (1987) Stochastic Simulation. Wiley, New York.

https://doi.org/10.1002/9780470316726

[44] De Jong, P. and Shephard, N. (1995) The Simulation Smoother for Time Series Models. Biometrika, 82, 339-350.

https://doi.org/10.1093/biomet/82.2.339

[45] Doornik, J. (1998) Object-Oriented Matrix Programming Using Ox 2.0. Timberlake Consultants Ltd., London.

http://www.nuff.ox.ac.uk/Users/Doornik

[46] Koopman, S., Shephard, N. and Doornik, J. (1999) Statistical Algorithms for Models in State Space Using Ssfpack 2.2. Econometrics Journal, 2, 113-166.

http://www.ssfpack.com

https://doi.org/10.1111/1368-423X.00023

[47] French, K.R., Schwert, G.W. and Stambaugh, R.F. (1987) Expected Stock Returns and Volatility. Journal of Financial Economics, 19, 3-29.

[48] Poon, S. and Taylor, S.J. (1992) Stock Returns and Volatility: An Empirical Study of the UK Stock Market. Journal of Banking and Finance, 16, 37-59.

[49] Kim, D. and Kon, S. (1994) Alternative Models for the Conditional Heteroscedasticity of Stock Returns. Journal of Business, 67, 563-598.

https://doi.org/10.1086/296647

[50] Lamoureux, C.G. and Lastrapes, W.D. (1990) Persistence in Variance, Structural Change and the GARCH Model. Journal of Business and Economic Statistics, 8, 225-243.

https://doi.org/10.1080/07350015.1990.10509794

[51] Hamilton, J.D. and Susmel, R. (1994) Autoregressive Conditional Heteroskedasticity and Changes in Regime. Journal of Econometrics, 64, 307-333.

[52] Hansen, B. (1992) The Likelihood Ratio Test under Nonstandard Conditions: Testing the Markov Switching Model of GNP. Journal of Applied Econometrics, 7, S61-S82.

https://doi.org/10.1002/jae.3950070506

[53] Kobayashi, M. and Shi, X. (2003) Testing for EGARCH against Stochastic Volatility Models. Journal of Time Series Analysis, 26, 135-150.

https://doi.org/10.1111/j.1467-9892.2005.00394.x

[54] Barone-Adesi, G., Burgoin, F. and Giannopoulos, K. (1998) Don’t Look Back. Risk, 11, 100-104.

[55] Christoffersen, P. (2003) Elements of Financial Risk Management, Academic Press, San Diego.

[56] Dowd, K. (1998) Beyond Value at Risk: The New Science of Risk Management. Wiley, New York.

[57] Pritsker, M. (2006) The Hidden Dangers of Historical Simulation. Journal of Banking and Finance, 30, 561-582.

[58] Christoffersen, P. (1998) Evaluating Interval Forecasts. International Economic Review, 39, 841-862.

https://doi.org/10.2307/2527341

[59] Christoffersen, P. and Diebold, F. (2000) How Relevant Is Volatility Forecasting for Financial Risk Management? Review of Economics and Statistics, 82, 12-22.

https://doi.org/10.1162/003465300558597

[60] Christoffersen, P. and Pelletier, D. (2003) Backtesting Portfolio Risk Measures: A Duration-Based Approach. Manuscript, McGill University and CIRANO.

[61] McNeil, A. and Frey, R. (2000) Estimation of Tail-Related Risk Measures for Heteroskedastic Financial Time Series: An Extreme Value Approach. Journal of Empirical Finance, 7, 271-300.

[62] Diebold, F.X., Gunther, T. and Tsay, A. (1998) Evaluating Density Forecasts, with Applications to Financial Risk Management. International Economic Review, 39, 863-883.

https://doi.org/10.2307/2527342

[63] Diebold, F.X., Hahn, J. and Tsay, A. (1999) Multivariate Density Forecasts Evaluation and Calibration in Financial Risk Management: High Frequency Returns on Foreign Exchange. Review of Economics and Statistics, 81, 661-673.

https://doi.org/10.1162/003465399558526