\( % Math operators \newcommand{\argmax}{\arg\!\max} \newcommand{\argmin}{\arg\!\min} \newcommand\ev[1]{E\left\langle#1\right\rangle} \newcommand\vv[1]{V\left\langle#1\right\rangle} % Math commands \newcommand{\mat}[1]{\mathbf{#1}} % Math symbols \renewcommand{\NN}{\mathcal{N}} \renewcommand{\LL}{\mathcal{L}} \renewcommand{\RR}{\mathbb{R}} \)

Input-Output Hidden Markov Model applied to financial time series

Luis Damiano, Brian Peterson, Michael Weylandt

2017-07-02

This work aims at replicating the Input-Output Hidden Markov Model (IOHMM) originally proposed by Hassan and Nath (2005) to forecast stock prices. The main goal is to produce public programming code in Stan (Carpenter et al. 2016) for fully Bayesian estimation of the model parameters and inference on hidden quantities, namely the filtered state belief, the smoothed state belief, the jointly most probable state path and the fitted output. The model is introduced only briefly; a more detailed mathematical treatment can be found in our literature review.

The authors acknowledge Google for financial support via the Google Summer of Code 2017 program.


1 The Input-Output Hidden Markov Model

The IOHMM is an architecture proposed by Bengio and Frasconi (1995) to map input sequences, sometimes called the control signal, to output sequences. It is a probabilistic framework that can deal with general sequence processing tasks such as production, classification, or prediction. The main difference with Hidden Markov Models (HMM), which belong to the unsupervised learning paradigm, is the capability to learn the output sequence conditional on the input sequence, rather than just the distribution of the output sequence.

1.1 Definitions

As with HMM, IOHMM involves two interconnected models,

\[\begin{align*} z_{t} &= f(z_{t-1}, \mat{u}_{t}) \\ \mat{x}_{t} &= g(z_{t }, \mat{u}_{t}). \end{align*}\]

The first line corresponds to the state model, which consists of discrete-time, discrete-state hidden states \(z_t \in \{1, \dots, K\}\) whose transition depends on the previous hidden state \(z_{t-1}\) and the input vector \(\mat{u}_{t} \in \RR^M\). Additionally, the observation model is governed by \(g(z_{t}, \mat{u}_{t})\), where \(\mat{x}_t \in \RR^R\) is the vector of observations, emissions or output. The corresponding joint distribution is

\[ p(\mat{z}_{1:T}, \mat{x}_{1:T} | \mat{u}_{1:T}). \]

In the proposed parametrization with continuous inputs and outputs, the state model involves a multinomial regression whose parameters depend on the previous state taking the value \(i\),

\[ p(z_t | \mat{u}_{t}, z_{t-1} = i) = \text{softmax}(\mat{w}_i \mat{u}_{t}), \]

and the observation model is built upon the Gaussian density with parameters depending on the current state taking the value \(j\),

\[ p(\mat{x}_t | \mat{u}_{t}, z_{t} = j) = \mathcal{N}(\mat{x}_t | \mat{\mu}_j, \mat{\Sigma}_j). \]

IOHMM adapts the logic of HMM to allow for input and output vectors, retaining its fully probabilistic quality. Hidden states are assumed to follow a multinomial distribution that depends on the input sequence. The transition probabilities \(\Psi_t(i, j) = p(z_t = j | z_{t-1} = i, \mat{u}_{t})\), which govern the state dynamics, are driven by the control signal as well.

As for the output sequence, the local evidence at time \(t\) now becomes \(\psi_t(j) = p(\mat{x}_t | z_t = j, \mat{\eta}_t)\), where \(\mat{\eta}_t = \ev{\mat{x}_t | z_t, \mat{u}_t}\) can be interpreted as the expected location parameter for the probability distribution of the emission \(\mat{x}_{t}\) conditional on the input vector \(\mat{u}_t\) and the hidden state \(z_t\).

1.2 Inference

There are several quantities of interest to be estimated via different algorithms. In this section, the discussion assumes that model parameters are known.

1.2.1 Filtering

A filter infers the belief state at a given step based on all the information available up to that point,

\[\begin{align*} \alpha_t(j) & \triangleq p(z_t = j | \mat{x}_{1:t}, \mat{u}_{1:t}). \end{align*}\]

It achieves better noise reduction than simply estimating the hidden state based on the current estimate \(p(z_t | \mat{x}_{t})\). The filtering process can be run online, or recursively, as new data streams in. Filtered marginals can be computed recursively by means of the forward algorithm (Baum and Eagon 1967).
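
For concreteness, a minimal R sketch of the forward recursion follows. It assumes that the transition probabilities \(\Psi_t(i, j)\) and the local evidence \(\psi_t(j)\) defined above have already been evaluated; a production implementation, such as the one in our Stan program, would work in log scale for numerical stability.

forward_filter <- function(p1, Psi, psi) {
  # p1:  initial state distribution (length K)
  # Psi: array [T, K, K] with Psi[t, i, j] = p(z_t = j | z_{t-1} = i, u_t)
  # psi: matrix [T, K] with psi[t, j] = p(x_t | z_t = j, u_t)
  T.length <- nrow(psi)
  alpha    <- matrix(0, T.length, ncol(psi))

  alpha[1, ] <- p1 * psi[1, ]
  alpha[1, ] <- alpha[1, ] / sum(alpha[1, ])

  for (t in 2:T.length) {
    # Predict one step ahead, update with the local evidence, then normalize
    alpha[t, ] <- (alpha[t - 1, ] %*% Psi[t, , ]) * psi[t, ]
    alpha[t, ] <- alpha[t, ] / sum(alpha[t, ])
  }

  alpha
}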

1.2.2 Smoothing

A smoother infers the belief state at a given step based on all the observations or evidence,

\[ \begin{align*} \gamma_t(j) & \triangleq p(z_t = j | \mat{x}_{1:T}, \mat{u}_{1:T}) \\ & \propto \alpha_t(j) \beta_t(j), \end{align*} \]

where

\[\begin{align*} \beta_{t-1}(i) & \triangleq p(\mat{x}_{t:T} | z_{t-1} = i, \mat{u}_{t:T}). \end{align*}\]

Although noise and uncertainty are reduced as a result of conditioning on past and future data, the smoothing process can only be run offline. Inference can be run by means of the forwards-backwards algorithm, also known as the Baum-Welch algorithm (Baum and Eagon 1967; Baum et al. 1970).
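
A minimal sketch of the backward recursion and smoothing step, under the same assumptions as the forward sketch above; rescaling each \(\mat{\beta}_t\) avoids numerical underflow without affecting the normalized \(\mat{\gamma}_t\).

backward_smooth <- function(alpha, Psi, psi) {
  # alpha: filtered probabilities [T, K] from the forward pass
  # Psi, psi: as in the forward sketch above
  T.length <- nrow(psi)
  beta     <- matrix(0, T.length, ncol(psi))

  beta[T.length, ] <- 1
  for (t in (T.length - 1):1) {
    # beta_t(i) = sum_j Psi_{t+1}(i, j) psi_{t+1}(j) beta_{t+1}(j)
    beta[t, ] <- Psi[t + 1, , ] %*% (psi[t + 1, ] * beta[t + 1, ])
    beta[t, ] <- beta[t, ] / sum(beta[t, ])
  }

  gamma <- alpha * beta
  gamma / rowSums(gamma)   # gamma_t(j) is proportional to alpha_t(j) beta_t(j)
}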

1.2.3 Most likely hidden path

It is also of interest to compute the most probable state sequence or path,

\[ \mat{z}^* = \argmax_{\mat{z}_{1:T}} p(\mat{z}_{1:T} | \mat{x}_{1:T}). \]

The jointly most probable sequence of states can be inferred by means of maximum a posteriori (MAP) estimation or Viterbi decoding.
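
A minimal sketch of the Viterbi decoder under the same assumptions as the previous sketches, working directly in log scale.

viterbi_decode <- function(p1, Psi, psi) {
  # Returns the jointly most probable state path given precomputed Psi and psi
  T.length <- nrow(psi)
  K        <- ncol(psi)
  best     <- matrix(-Inf, T.length, K)  # best log-probability of a path ending in state j
  back     <- matrix(0, T.length, K)     # backpointers

  best[1, ] <- log(p1) + log(psi[1, ])
  for (t in 2:T.length) {
    for (j in 1:K) {
      cand       <- best[t - 1, ] + log(Psi[t, , j])
      back[t, j] <- which.max(cand)
      best[t, j] <- max(cand) + log(psi[t, j])
    }
  }

  # Backtrack from the most probable final state
  zstar <- numeric(T.length)
  zstar[T.length] <- which.max(best[T.length, ])
  for (t in (T.length - 1):1)
    zstar[t] <- back[t + 1, zstar[t + 1]]

  zstar
}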

1.3 Parameter estimation

The parameters of the models are \(\mat{\theta} = (\mat{\pi}_1, \mat{\theta}_h, \mat{\theta}_o)\), where \(\mat{\pi}_1\) is the initial state distribution, \(\mat{\theta}_h\) are the parameters of the hidden model and \(\mat{\theta}_o\) are the parameters of the state-conditional density function \(p(\mat{x}_t | z_t = j, \mat{u}_t)\). The form of \(\mat{\theta}_h\) and \(\mat{\theta}_o\) depends on the specification of each model. For example, state transition may be characterized by a logistic or multinomial regression with parameters \(\mat{w}_k\) for \(k \in \{1, \dots, K\}\), while emissions may be modeled by a linear regression with Gaussian error with parameters \(\mat{b}_k\) and \(\mat{\Sigma}_k\) for \(k \in \{1, \dots, K\}\).


2 Learning by simulation

Model complexity and limited software availability require that we implement our sampler from scratch. Following a very simplified version of the methodology proposed by Cook, Gelman, and Rubin (2006), we first create a simulation routine that generates data complying with the model assumptions and then write an MCMC sampler in Stan to recover the parameters and other hidden quantities. Only after the correctness of our software is tested do we feed our model with real data and analyse the results.

We believe that learning by simulation has many benefits; most notably, it allows us to verify that the sampler recovers known parameters and hidden quantities before confronting it with real data.

2.1 Auxiliary files

2.1.1 Math functions

We write an auxiliary R function to compute the softmax transformation of a given vector. The calculations are run in log scale for greater numerical stability, i.e. to avoid any overflow.

source('../common/R/math.R')
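
The file is not reproduced here; a sketch of what such a helper might look like, using the usual log-sum-exp trick, is the following.

softmax <- function(v) {
  # Subtracting the maximum before exponentiating keeps exp() in a safe range
  logsumexp <- max(v) + log(sum(exp(v - max(v))))
  exp(v - logsumexp)
}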

2.1.2 Plot functions

As plots are extensively used, we arrange the code in an auxiliary file.

source('../common/R/plots.R')

2.1.3 Simulation functions

We arrange the code to generate simulated data, as explained below, in an auxiliary file.

source('../iohmm-reg/R/iohmm-sim.R')

2.2 Generative model

2.2.1 Data simulation

We first write an R function for our generative model. The arguments are the sequence length \(T\), the number of discrete hidden states \(K\), the input matrix \(\mat{u}\), a matrix \(\mat{w}\) with the parameters of the multinomial regression that rules the hidden-state dynamics, the initial state distribution vector \(\mat{\pi}_1\), the name of a function drawing samples from the observation distribution, and its arguments.

iohmm_sim <- function(T, K, u, w, p.init, obs.model, obs.pars) {
  m <- ncol(u)

  if (dim(u)[1] != T)
    stop("The input matrix must have T rows.")

  if (any(dim(w) != c(K, m)))
    stop("The transition weight matrix must be of size Kxm, where m is the size of the input vector.")

  if (length(p.init) != K)
    stop("The vector p.init must have length K.")

  p.mat <- matrix(0, nrow = T, ncol = K)
  p.mat[1, ] <- p.init

  z <- vector("numeric", T)
  z[1] <- sample(x = 1:K, size = 1, replace = FALSE, prob = p.init)
  for (t in 2:T) {
    p.mat[t, ] <- softmax(sapply(1:K, function(j) {u[t, ] %*% w[j, ]}))
    z[t] <- sample(x = 1:K, size = 1, replace = FALSE, prob = p.mat[t, ])
  }

  x <- do.call(obs.model, c(list(u = u, z = z), obs.pars))

  list(
    u = u,
    z = z,
    x = x,
    p.mat = p.mat
  )
}

The initial hidden state is drawn from a multinomial distribution with one trial and event probabilities given by the initial state probability vector \(\mat{\pi}_1\). The transition probabilities for each of the following steps \(t \in \{2, \dots, T\}\) are generated from a multinomial regression with vector parameters \(\mat{w}_k\), one set per possible hidden state \(k \in \{1, \dots, K\}\), and covariates \(\mat{u}_t\). The hidden states are subsequently sampled based on these transition probabilities.

The observation at each step may be generated from a linear regression with parameters \(\mat{b}_k\) and \(\mat{\Sigma}_k\), one set per possible hidden state.

obsmodel_reg <- function(...) {
  args <- list(...)
  u <- args$u
  z <- args$z
  b <- args$b
  s <- args$s

  K <- length(unique(z))
  m <- ncol(u)

  if (any(dim(b) != c(K, m)))
    stop("The regressors matrix must be of size Kxm, where m is the size of the input vector.")

  T.length <- nrow(u)

  x <- vector("numeric", T.length)
  for (t in 1:T.length) {
    x[t] <- rnorm(1, u[t, ] %*% b[z[t], ], s[z[t]])
  }

  return(x)
}

Alternatively, the observation could originate in a Gaussian mixture density where \(\lambda_{kl}\) is the component weight for the \(l\)-th component within the \(k\)-th state, \(0 \le \lambda_{kl} \le 1 \ \forall \ l \in \{1, \dots, L\}, k \in \{1, \dots, K\}\) and \(\sum_{l=1}^{L}{\lambda_{kl}} = 1 \ \forall \ k\).

obsmodel_mix <- function(...) {
  args <- list(...)
  z <- args$z
  lambda <- args$lambda
  mu <- args$mu
  s <- args$s

  if (!all(c(nrow(lambda), nrow(mu), nrow(s)) == length(unique(z))))
    stop("The parameter matrices lambda, mu and s must have one row per
         hidden state.")

  T.length <- length(z)
  L <- ncol(lambda)

  x <- vector("numeric", T.length)
  for (t in 1:T.length) {
    l <- sample(1:L, 1, FALSE, prob = lambda[z[t], ])
    x[t] <- rnorm(1, mu[z[t], l], s[z[t], l])
  }

  return(x)
}

We set up our simulated experiments for a regression observation model with arbitrary values for all the involved parameters. Additionally, we define the settings for the Markov Chain Monte Carlo sampler.

# Data
T.length = 300
K = 3
M = 4
R = 1
u.intercept = FALSE
w = matrix(
  c(1.2, 0.5, 0.3, 0.1, 0.5, 1.2, 0.3, 0.1, 0.5, 0.1, 1.2, 0.1),
  nrow = K, ncol = M, byrow = TRUE)
b = matrix(
  c(5.0, 6.0, 7.0, 0.5, 1.0, 5.0, 0.1, -0.5, 0.1, -1.0, -5.0, 0.2),
  nrow = K, ncol = M, byrow = TRUE)
s = c(0.2, 1.0, 2.5)
p1 = c(0.4, 0.2, 0.4)

# Markov Chain Monte Carlo
n.iter = 400
n.warmup = 200
n.chains = 1
n.cores = 1
n.thin = 1
n.seed = 9000

We anticipate identification issues in our sampler given the nature of the model. As an arguable simplification with obvious drawbacks, we decide to rely only on one chain to avoid between-chain label switching of regression parameters. We refer the reader to Betancourt (2017) for an in-depth treatment of the diagnostics, causes and possible solutions for label switching in Bayesian Mixture Models.

We draw random inputs from a standard Gaussian distribution and generate the dataset accordingly.

set.seed(n.seed)
u <- matrix(rnorm(T.length*M, 0, 1), nrow = T.length, ncol = M)
if (u.intercept)
  u[, 1] = 1

dataset <- iohmm_sim(T.length, K, u, w, p1, obsmodel_reg, list(b = b, s = s))
plot_inputoutput(x = dataset$x, u = dataset$u, z = dataset$z)

We observe how the chosen values for the parameters affect the generated data. For example, the relationship between the third input \(\mat{u}_3\) and the output \(\mat{x}\) is positive, essentially flat, and negative for hidden states 1 through 3, the true slopes being 7, 0.1 and -5 respectively.

plot_inputprob(u = dataset$u, p.mat = dataset$p.mat, z = dataset$z)

We then analyse the relationship between the input and the state probabilities, which are usually hidden in applications with real data. The pairs \(\{ \mat{u}_1, p(z_t = 1) \}\), \(\{ \mat{u}_2, p(z_t = 2) \}\) and \(\{ \mat{u}_3, p(z_t = 3) \}\) are most related due to the choice of values for the true regression parameters: those inputs take the largest weight in each state, namely \(w_{11} = 1.2\), \(w_{22} = 1.2\) and \(w_{33} = 1.2\).

2.2.2 Estimation

Adopting a fully Bayesian approach, we run our software to draw samples from the posterior density of model parameters and other hidden quantities.

library(rstan)
## Loading required package: ggplot2
## Loading required package: StanHeaders
## rstan (Version 2.14.2, packaged: 2017-03-19 00:42:29 UTC, GitRev: 5fa1e80eb817)
## For execution on a local, multicore CPU with excess RAM we recommend calling
## rstan_options(auto_write = TRUE)
## options(mc.cores = parallel::detectCores())
library(shinystan)
## Loading required package: shiny
## 
## This is shinystan version 2.3.0
rstan_options(auto_write = TRUE)
options(mc.cores = parallel::detectCores())

stan.model = '../iohmm-reg/stan/iohmm-reg.stan'
stan.data = list(
  T = T.length,
  K = K,
  M = M,
  u_tm = as.array(u),
  x_t = dataset$x
)

stan.fit <- stan(file = stan.model,
                 data = stan.data, verbose = T,
                 iter = n.iter, warmup = n.warmup,
                 thin = n.thin, chains = n.chains,
                 cores = n.cores, seed = n.seed)
## 
## TRANSLATING MODEL 'iohmm-reg' FROM Stan CODE TO C++ CODE NOW.
## successful in parsing the Stan model 'iohmm-reg'.
## 
## CHECKING DATA AND PREPROCESSING FOR MODEL 'iohmm-reg' NOW.
## 
## COMPILING MODEL 'iohmm-reg' NOW.
## 
## STARTING SAMPLER FOR MODEL 'iohmm-reg' NOW.
## 
## SAMPLING FOR MODEL 'iohmm-reg' NOW (CHAIN 1).
## 
## Chain 1, Iteration:   1 / 400 [  0%]  (Warmup)
## Chain 1, Iteration:  40 / 400 [ 10%]  (Warmup)
## Chain 1, Iteration:  80 / 400 [ 20%]  (Warmup)
## Chain 1, Iteration: 120 / 400 [ 30%]  (Warmup)
## Chain 1, Iteration: 160 / 400 [ 40%]  (Warmup)
## Chain 1, Iteration: 200 / 400 [ 50%]  (Warmup)
## Chain 1, Iteration: 201 / 400 [ 50%]  (Sampling)
## Chain 1, Iteration: 240 / 400 [ 60%]  (Sampling)
## Chain 1, Iteration: 280 / 400 [ 70%]  (Sampling)
## Chain 1, Iteration: 320 / 400 [ 80%]  (Sampling)
## Chain 1, Iteration: 360 / 400 [ 90%]  (Sampling)
## Chain 1, Iteration: 400 / 400 [100%]  (Sampling)
##  Elapsed Time: 99.975 seconds (Warm-up)
##                47.487 seconds (Sampling)
##                147.462 seconds (Total)
n.samples = (n.iter - n.warmup) * n.chains

2.2.3 Convergence

We rely on several diagnostic statistics and plots provided by rstan (Stan Development Team 2017a) and shinystan (Stan Development Team 2017b) to assess mixing, convergence and the absence of divergent transitions. We then extract the samples for some quantities of interest, namely the filtered probability vector \(\mat{\alpha}_t\), the smoothed probability vector \(\mat{\gamma}_t\), the most probable hidden path \(\mat{z}^*\) and the fitted output \(\hat{x}\).

# MCMC Diagnostics --------------------------------------------------------
summary(stan.fit,
        pars = c('p_1k', 'w_km', 'b_km', 's_k'),
        probs = c(0.50))$summary
# launch_shinystan(stan.fit)

# Extraction --------------------------------------------------------------
alpha_tk <- extract(stan.fit, pars = 'alpha_tk')[[1]]
gamma_tk <- extract(stan.fit, pars = 'gamma_tk')[[1]]
zstar_t <- extract(stan.fit, pars = 'zstar_t')[[1]]
hatx_t  <- extract(stan.fit, pars = 'hatx_t')[[1]]
Estimated parameters and hidden quantities. MCSE = Monte Carlo Standard Error, SE = Standard Error, Med = Median, ESS = Effective Sample Size.
Mean MCSE SE Med ESS \(\hat{R}\)
p_1k[1] 0.24 0.01 0.20 0.20 200.00 1.00
p_1k[2] 0.27 0.02 0.21 0.23 200.00 1.00
p_1k[3] 0.49 0.02 0.24 0.47 200.00 1.00
w_km[1,1] -0.23 0.30 2.84 -0.33 89.03 1.00
w_km[1,2] 0.10 0.27 2.95 0.47 122.00 1.00
w_km[1,3] 0.39 0.29 2.66 0.46 84.81 1.02
w_km[1,4] -0.18 0.53 3.30 -0.17 38.12 1.01
w_km[2,1] -0.09 0.30 2.85 -0.19 88.67 1.00
w_km[2,2] 0.27 0.27 2.95 0.64 122.88 1.00
w_km[2,3] 0.32 0.29 2.65 0.36 84.84 1.02
w_km[2,4] -0.53 0.53 3.30 -0.54 38.28 1.01
w_km[3,1] -0.28 0.30 2.86 -0.38 90.44 1.00
w_km[3,2] 0.14 0.27 2.96 0.56 122.08 1.00
w_km[3,3] 0.17 0.29 2.65 0.15 86.48 1.02
w_km[3,4] -0.37 0.53 3.30 -0.43 38.08 1.01
b_km[1,1] 0.01 0.03 0.39 0.00 181.98 1.02
b_km[1,2] -1.05 0.02 0.29 -1.06 200.00 1.00
b_km[1,3] -4.91 0.03 0.36 -4.90 200.00 1.00
b_km[1,4] -0.09 0.02 0.33 -0.09 200.00 1.01
b_km[2,1] 0.86 0.01 0.13 0.86 200.00 1.00
b_km[2,2] 5.00 0.01 0.08 5.00 200.00 1.00
b_km[2,3] 0.15 0.01 0.10 0.15 200.00 1.00
b_km[2,4] -0.28 0.01 0.13 -0.28 200.00 1.00
b_km[3,1] 5.04 0.00 0.03 5.04 200.00 1.01
b_km[3,2] 6.01 0.00 0.02 6.01 200.00 1.01
b_km[3,3] 6.99 0.00 0.02 6.99 200.00 1.00
b_km[3,4] 0.46 0.00 0.03 0.47 200.00 1.00
s_k[1] 2.74 0.01 0.21 2.74 200.00 1.00
s_k[2] 1.00 0.01 0.09 0.99 200.00 1.00
s_k[3] 0.21 0.00 0.02 0.21 200.00 1.00

While mixing and convergence are extremely efficient, as expected when dealing with generated data, we note that the regression parameters for the latent states are the worst performers. The small effective sample size translates into a high Monte Carlo standard error and broader credibility intervals. One possible reason is that the softmax function is invariant to location shifts; consequently, the parameters of a multinomial regression do not have a natural center and become harder to estimate.
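
As a quick illustration of this invariance (using the softmax helper sourced earlier, or the sketch above), shifting every linear predictor by the same constant leaves the probabilities unchanged.

eta <- c(1.2, 0.5, 0.3)
all.equal(softmax(eta), softmax(eta + 100))  # TRUE: only differences between predictors matter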

2.2.4 State probability

We assess whether our software recovers the hidden states correctly. Due to label switching, the states generated under the labels 1 through 3 were recovered in inverse order. Consequently, we relabel the observations based on the best fit. This would not be a problem with real data, considering that the hidden states are never observed.

# Relabelling (ugly hack edition) -----------------------------------------
dataset$zrelab <- rep(0, T.length)

hard <- sapply(1:T.length, function(t, med) {
  which.max(med[t, ])
}, med = apply(alpha_tk, c(2, 3),
                    function(x) {
                      quantile(x, c(0.50)) }))

tab <- table(hard = hard, original = dataset$z)

for (k in 1:K) {
  dataset$zrelab[which(dataset$z == k)] <- which.max(tab[, k])
}

print("Label re-imputation (relabeling due to switching labels)")
## [1] "Label re-imputation (relabeling due to switching labels)"
table(new = dataset$zrelab, original = dataset$z)
##    original
## new   1   2   3
##   1   0   0 102
##   2   0 108   0
##   3  90   0   0

Point estimates and credibility intervals are provided by rstan’s summary function.

print("Estimated initial state probabilities")
summary(stan.fit,
        pars = c('p_1k'),
        probs = c(0.10, 0.50, 0.90))$summary[, c(1, 3, 4, 5, 6)]

print("Estimated probabilities in the transition matrix")
head(summary(stan.fit,
        pars = c('A_ij'),
        probs = c(0.10, 0.50, 0.90))$summary[, c(1, 3, 4, 5, 6)])

print("Estimated regressors of hidden states")
summary(stan.fit,
        pars = c('w_km'),
        probs = c(0.10, 0.50, 0.90))$summary[, c(1, 3, 4, 5, 6)]

print("Estimated regressors and standard deviation of observations in each state")
summary(stan.fit,
        pars = c('b_km', 's_k'),
        probs = c(0.10, 0.50, 0.90))$summary[, c(1, 3, 4, 5, 6)]
Estimated initial state probabilities.
Mean SE 10% Med 90%
p_1k[1] 0.24 0.20 0.03 0.20 0.52
p_1k[2] 0.27 0.21 0.03 0.23 0.55
p_1k[3] 0.49 0.24 0.16 0.47 0.85
Estimated probabilities in the transition matrix.
Mean SE 10% Med 90%
A_ij[1,1] 0.24 0.20 0.03 0.20 0.52
A_ij[1,2] 0.27 0.21 0.03 0.23 0.55
A_ij[1,3] 0.49 0.24 0.16 0.47 0.85
A_ij[2,1] 0.33 0.02 0.31 0.33 0.36
A_ij[2,2] 0.36 0.02 0.33 0.36 0.39
A_ij[2,3] 0.31 0.02 0.29 0.31 0.33
Estimated regressors of hidden states.
Mean SE 10% Med 90%
w_km[1,1] -0.23 2.84 -3.50 -0.33 3.30
w_km[1,2] 0.10 2.95 -3.56 0.47 3.64
w_km[1,3] 0.39 2.66 -3.27 0.46 3.51
w_km[1,4] -0.18 3.30 -4.51 -0.17 4.22
w_km[2,1] -0.09 2.85 -3.57 -0.19 3.43
w_km[2,2] 0.27 2.95 -3.49 0.64 3.84
w_km[2,3] 0.32 2.65 -3.23 0.36 3.52
w_km[2,4] -0.53 3.30 -4.89 -0.54 3.83
w_km[3,1] -0.28 2.86 -3.69 -0.38 3.36
w_km[3,2] 0.14 2.96 -3.65 0.56 3.68
w_km[3,3] 0.17 2.65 -3.46 0.15 3.37
w_km[3,4] -0.37 3.30 -4.72 -0.43 4.01
Estimated regression parameters and standard deviation of observations in each state.
Mean SE 10% Med 90%
b_km[1,1] 0.01 0.39 -0.44 0.00 0.54
b_km[1,2] -1.05 0.29 -1.41 -1.06 -0.65
b_km[1,3] -4.91 0.36 -5.33 -4.90 -4.48
b_km[1,4] -0.09 0.33 -0.52 -0.09 0.31
b_km[2,1] 0.86 0.13 0.68 0.86 1.02
b_km[2,2] 5.00 0.08 4.90 5.00 5.10
b_km[2,3] 0.15 0.10 0.03 0.15 0.28
b_km[2,4] -0.28 0.13 -0.44 -0.28 -0.11
b_km[3,1] 5.04 0.03 5.00 5.04 5.07
b_km[3,2] 6.01 0.02 5.98 6.01 6.04
b_km[3,3] 6.99 0.02 6.97 6.99 7.02
b_km[3,4] 0.46 0.03 0.43 0.47 0.50
s_k[1] 2.74 0.21 2.50 2.74 3.02
s_k[2] 1.00 0.09 0.90 0.99 1.13
s_k[3] 0.21 0.02 0.19 0.21 0.23

The samples drawn from the posterior density are reasonably close to the true values. We additionally check the estimated hidden quantities.

plot_stateprobability(alpha_tk, gamma_tk, 0.8, dataset$zrelab)

print("Estimated hidden states (hard naive classification using filtered probabilities)")
print(table(
  estimated = apply(round(apply(alpha_tk, c(2, 3),
                                function(x) {
                                  quantile(x, c(0.50)) })), 1, which.max),
  real = dataset$zrelab))
Hard classification.
Estimated Real 1 Real 2 Real 3
1 89 4 1
2 10 100 1
3 3 4 88

The filtered probabilities are highly accurate, as expected, making the additional information contained in the smoothed probabilities seem of little value. The confusion matrix makes evident that, under ideal conditions, the sampler works as intended. Similarly, the Viterbi algorithm recovers the expected most probable hidden path.

plot_statepath(zstar_t, dataset$zrelab)

print("Estimated hidden states for the jointly most likely path (Viterbi decoding)")
round(table(
  actual = rep(dataset$zrelab, each = n.samples),
  fit = zstar_t) / n.samples, 0)
Viterbi decoding (rows: actual relabelled state, columns: decoded state).
1 2 3
89 4 1
10 100 1
3 4 88

2.2.5 Fitted output

plot_outputfit(dataset$x, hatx_t, z = dataset$zrelab, TRUE)

All in all, we are confident that the Stan code works as specified and we find model calibration satisfactory.


3 Stock Market Forecasting Using Hidden Markov Model

3.1 Preamble

We first load the packages and source auxiliary functions.

library(quantmod)
library(rstan)
library(shinystan)
source('R/data.R')
source('R/forecast.R')
source('R/wf-forecast.R')
source('../common/R/math.R')
source('../common/R/plots.R')
source('../iohmm-mix/R/iohmm-mix-init.R')

3.2 Model

We adhere to the methodology proposed in the original work as much as possible. We set up an IOHMM with \(K = 4\) hidden states to forecast stock prices. The hidden dynamics are driven by a multinomial (softmax) regression with \(M = 4\) inputs, namely opening, closing, highest and lowest prices from the previous time step. The univariate output is the stock closing price for the current time step. The observation model is a Gaussian mixture with \(L = 3\) components per state,

\[ p(x_t | z_{t} = j) = \sum_{l = 1}^{L}{\lambda_{jl} \ \NN(x_t | \mu_{jl}, \sigma_{jl})}, \]

where \(\lambda_{jl}\) is the component weight for the \(l\)-th component within the \(j\)-th state, \(0 \le \lambda_{jl} \le 1 \ \forall \ l, j\) and \(\sum_{l=1}^{L}{\lambda_{jl}} = 1 \ \forall \ j\).

The vector parameter then becomes \(\mat{\theta} = (\mat{\pi}_1, \mat{w}_k, \lambda_{kl}, \mu_{kl}, \sigma_{kl})\) for \(k \in \{1, \dots, K\}\) and \(l \in \{1, \dots, L\}\).

K = 4
L = 3
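
For reference, the state-conditional emission density above can be evaluated in R with a one-liner; the parameter names mirror the simulation functions used earlier and are illustrative rather than taken from the Stan program.

mix_density <- function(x, j, lambda, mu, s) {
  # p(x | z = j) = sum_l lambda[j, l] * N(x | mu[j, l], s[j, l])
  sum(lambda[j, ] * dnorm(x, mu[j, ], s[j, ]))
}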

3.3 The sampler

We use Stan’s Hamiltonian Monte Carlo to run a fully Bayesian estimation of the model parameters. Additionally, we compute many hidden quantities, including the forward probability, the future likelihood, the jointly most likely path and the transition probabilities. We draw samples from the hidden state distribution as well. To this end, we write our own implementation of the forward filter, the forwards-backwards filter and the Viterbi decoding algorithms. We acknowledge the authors of the Stan Manual (Stan Development Team 2017c) for the numerous examples, some of which served as a starting point for our code.

We define the following parameters for the MCMC sampler. The chosen hyperparameters correspond to diffuse priors.

hyperparams <- c(0, 5, 5, 0, 3, 1, 1, 0, 5);

n.iter = 800
n.warmup = 400
n.chains = 1
n.cores = 1
n.thin = 1
n.seed = 9000

3.4 Dataset & data transformation

We use daily OHLC prices for the stocks and time spans listed in the original paper. The data is downloaded from Yahoo Finance via Quantmod. We omit two delisted stocks as the series are no longer available in public sources such as Yahoo and Google Finance.

syms <- data.frame(
  symbol    = c("LUV", "RYA.L"),
  name      = c("Southwest Airlines Co", "Ryanair Holdings Plc"),
  train.from = c("2002-12-18", "2003-05-06"),
  train.to   = c("2004-07-23", "2004-12-06"),
  test.from  = c("2004-07-24", "2004-12-07"),
  test.to    = c("2004-11-17", "2005-03-17"),
  src        = c("yahoo", "yahoo"),
  stringsAsFactors = FALSE)
Details of the dataset.
Symbol Name Training set Test set
LUV Southwest Airlines Co 2002-12-18 to 2004-07-23 2004-07-24 to 2004-11-17
RYA.L Ryanair Holdings Plc 2003-05-06 to 2004-12-06 2004-12-07 to 2005-03-17

We found that feeding the raw dataset to our sampler led to convergence issues. We learnt by trial and error that centering and rescaling the variables not only improved convergence but also sped up the software by a factor of five.

make_dataset <- function(prices, scale = TRUE) {
  T.length = nrow(prices)

  x = as.vector(Cl(prices))[2:T.length]
  u = as.matrix(prices)[1:(T.length - 1), 1:4]

  dataset <- list(
    x = as.vector(x),
    u = as.matrix(u),
    x.unscaled = as.vector(x),
    u.unscaled = as.matrix(u),
    x.center = 0,
    x.scale  = 1,
    u.center = 0,
    u.scale  = 1
  )

  if (scale) {
    x <- scale(x, TRUE, TRUE)
    u <- scale(u, TRUE, TRUE)

    dataset$x <- as.vector(x)
    dataset$u <- as.matrix(u)
    dataset$x.center <- attr(x, "scaled:center")
    dataset$x.scale  <- attr(x, "scaled:scale")
    dataset$u.center <- attr(u, "scaled:center")
    dataset$u.scale  <- attr(u, "scaled:scale")
  }

  dataset
}

3.5 Methodology

The goal of the experiment is to predict the closing price for a specific stock market share. According to the proposed methodology, we estimate the model parameters \(\mat{\hat{\theta}}\) using all the information available up to a given step. We compute the likelihood of the last observation in the training set conditional on the trained parameters \(p(\mat{x}_{t} | \mat{\hat{\theta}})\) and we locate the observation in the training set with the nearest likelihood,

\[ t^* = \argmin_{s} \ \left| p(\mat{x}_{t} | \mat{\hat{\theta}}) - p(\mat{x}_{s} | \mat{\hat{\theta}}) \right|. \]

We compute the change in the closing price from the located time step to the next one,

\[ \Delta_{t^*} = \mat{x}_{t^*+1} - \mat{x}_{t^*}, \]

and build our forecast assuming that a similar pattern will repeat

\[ \mat{\hat{x}}_{t+1} = \mat{x}_{t} + \Delta_{t^*}. \]

neighbouring_forecast <- function(x, oblik_t, h = 1, threshold = 0.05) {
  if (!is.vector(x) || length(x) != dim(oblik_t)[2])
    stop("The size of the observation vector and the width of the likelihood
         array must be equal.")

  n.samples <- dim(oblik_t)[1]
  T.length  <- dim(oblik_t)[2]

  find_closest <- function(target, candidates, threshold) {
    ind <- which(abs(target - candidates) < abs(target) * threshold)

    if (!length(ind))
      ind <- which(abs(target - candidates) == min(abs(target - candidates)))

    ind
  }

  forecast = vector("numeric", n.samples)
  for (n in 1:n.samples) {
    oblik_target <- oblik_t[n, T.length]
    oblik_cand   <- oblik_t[n, 1:(T.length - h)]

    closests <- find_closest(oblik_target, oblik_cand, threshold)
    d        <- abs(oblik_target - oblik_t[n, closests])
    w        <- exp(d)

    forecast[n] <- x[T.length] + sum((x[closests + h] - x[closests]) * w) / sum(w)
  }

  forecast
}

Although the authors state that one or more observations in the training dataset are selected, the paper provides no details about the determination of the number, the selection and the combination of the neighboring observations. In our implementation, we locate all the observations whose estimated likelihood is within the interval that covers \(\pm 5\%\) of the target observation likelihood.

\[ \left\{ \mat{x}_{s} : \ \left| p(\mat{x}_{t} | \mat{\hat{\theta}}) - p(\mat{x}_{s} | \mat{\hat{\theta}}) \right| < 0.05 \times p(\mat{x}_{t} | \mat{\hat{\theta}}) \right\}. \]

If the set is empty, we use the past observation with the most similar likelihood instead.

The procedure is run independently for each stock.

3.6 Southwest Airlines (LUV)

We present an in-depth study of one stock, LUV, to assess the strengths and weaknesses of the model. We later present a brief analysis of RYA.L.

3.6.1 Data exploration

symbol <- syms[1, ] # LUV
prices   <- getSymbols(symbol$symbol,
                       env  = NULL,
                       from = symbol$train.from,
                       to   = symbol$test.to,
                       src  = symbol$src)
T.length <- nrow(prices[paste(symbol$train.from, symbol$train.to, sep = "/")])
dataset <- make_dataset(prices[1:T.length, ], TRUE)

plot_inputoutput(x = dataset$x, u = dataset$u,
                 x.label = symbol$symbol,
                 u.label = c("Open", "High", "Low", "Close"))

Unsurprisingly, OHLC prices follow an almost identical trend and are highly correlated with the next day's closing price. Nonetheless, we stress that the correlation of two non-stationary variables is an unreliable measure. Although it is customary to take first differences or apply other transformations to achieve stationarity, the price level is purposely retained in the input series.

3.6.2 Estimation

We attempt to ease convergence by setting up approximate initial values for the component parameters. Although nested k-means is not faithful to the model we replicate, we still find this simplification worthwhile.

rstan_options(auto_write = TRUE)

stan.model = '../iohmm-mix/stan/iohmm-hmix.stan'
stan.data = list(
  K = K,
  L = L,
  M = ncol(dataset$u),
  T = length(dataset$x),
  u_tm = as.array(dataset$u),
  x_t = as.vector(dataset$x),
  hyperparams = as.array(hyperparams)
)

stan.fit <- stan(file = stan.model,
                 data = stan.data, verbose = T,
                 iter = n.iter, warmup = n.warmup,
                 thin = n.thin, chains = n.chains,
                 cores = n.cores, seed = n.seed,
                 init = function() {init_fun(stan.data)})
## 
## TRANSLATING MODEL 'iohmm-hmix' FROM Stan CODE TO C++ CODE NOW.
## successful in parsing the Stan model 'iohmm-hmix'.
## 
## CHECKING DATA AND PREPROCESSING FOR MODEL 'iohmm-hmix' NOW.
## 
## COMPILING MODEL 'iohmm-hmix' NOW.
## 
## STARTING SAMPLER FOR MODEL 'iohmm-hmix' NOW.
## 
## SAMPLING FOR MODEL 'iohmm-hmix' NOW (CHAIN 1).
## 
## Chain 1, Iteration:   1 / 800 [  0%]  (Warmup)
## Chain 1, Iteration:  80 / 800 [ 10%]  (Warmup)
## Chain 1, Iteration: 160 / 800 [ 20%]  (Warmup)
## Chain 1, Iteration: 240 / 800 [ 30%]  (Warmup)
## Chain 1, Iteration: 320 / 800 [ 40%]  (Warmup)
## Chain 1, Iteration: 400 / 800 [ 50%]  (Warmup)
## Chain 1, Iteration: 401 / 800 [ 50%]  (Sampling)
## Chain 1, Iteration: 480 / 800 [ 60%]  (Sampling)
## Chain 1, Iteration: 560 / 800 [ 70%]  (Sampling)
## Chain 1, Iteration: 640 / 800 [ 80%]  (Sampling)
## Chain 1, Iteration: 720 / 800 [ 90%]  (Sampling)
## Chain 1, Iteration: 800 / 800 [100%]  (Sampling)
##  Elapsed Time: 1343.81 seconds (Warm-up)
##                1454.28 seconds (Sampling)
##                2798.09 seconds (Total)
## 
## [1] "The following numerical problems occurred the indicated number of times on chain 1"
##                                                                                 count
## Exception thrown at line 57: normal_log: Scale parameter is 0, but must be > 0!     1
## [1] "When a numerical problem occurs, the Hamiltonian proposal gets rejected."
## [1] "See http://mc-stan.org/misc/warnings.html#exception-hamiltonian-proposal-rejected"
## [1] "If the number in the 'count' column is small,  there is no need to ask about this message on stan-users."
n.samples = (n.iter - n.warmup) * n.chains

Long computational times are a forewarning of poor convergence. This may be specific to the characteristics of the selected price series; despite the similar length, RYA.L performs significantly faster as shown below. As suggested by the warning message and the extended guidelines in the Stan website, the numerical problem is negligible and we do not take any further action.

3.6.3 Convergence

Graphical diagnostics (not shown) make evident that convergence to stationarity is weak at best. Label switching complicates the assessment since we cannot run and compare several chains, as is widely suggested in the standard literature. On the other hand, the number of iterations chosen seems to be enough to allow convergence of averages, as assessed by the cumulative averages curve. Nonetheless, this does not compensate for the fragile convergence to stationarity.

Relying on the shrink factor of Gelman and Rubin (1992), we identify the component means, standard deviations and weights as the quantities that converge most slowly. More worryingly, the effective sample size of some mixture parameters lies below 10% of the number of iterations. Analogously, the Monte Carlo standard error exceeds 10% of the estimated standard error. Although thinning could reduce autocorrelation, its purpose is computational (avoiding the storage of, and computation on, very long sample sequences) rather than statistical (Gelman, Shirley, and others 2011). The mean hyperparameters sample efficiently, a clear indication that the addition of a hierarchy is a pertinent improvement.

knitr::kable(summary(stan.fit,
        pars = c('p_1k', 'w_km', 'lambda_kl', 'hypermu_k', 'mu_kl', 's_kl'),
        probs = c(0.10, 0.50, 0.90))$summary,
        col.names = c("Mean", "MCSE", "SE", "10\\%", "Med", "90\\%", "ESS", "$\\hat{R}$"),
      digits = 2, caption = "Summary of the posterior draws from the distribution of the estimated parameters.")
Summary of the posterior draws from the distribution of the estimated parameters.
Mean MCSE SE 10% Med 90% ESS \(\hat{R}\)
p_1k[1] 0.21 0.01 0.18 0.02 0.18 0.47 400.00 1.00
p_1k[2] 0.19 0.01 0.16 0.02 0.15 0.41 262.71 1.00
p_1k[3] 0.21 0.01 0.17 0.03 0.17 0.45 374.47 1.00
p_1k[4] 0.39 0.01 0.20 0.13 0.39 0.66 400.00 1.00
w_km[1,1] -1.54 0.22 3.75 -6.19 -1.70 3.64 292.44 1.00
w_km[1,2] -2.11 0.32 4.67 -8.45 -2.12 4.15 214.32 1.00
w_km[1,3] -0.05 0.21 3.90 -4.72 -0.17 5.32 348.06 1.00
w_km[1,4] -5.15 0.30 4.38 -11.11 -5.19 -0.20 216.27 1.00
w_km[2,1] 1.70 0.23 3.65 -3.01 1.86 5.88 241.57 1.00
w_km[2,2] 4.35 0.23 3.43 0.12 4.15 8.87 227.35 1.02
w_km[2,3] -0.26 0.22 3.58 -4.95 -0.27 3.94 256.77 1.00
w_km[2,4] 7.98 0.20 3.40 3.53 8.02 12.40 290.58 1.00
w_km[3,1] -1.57 0.24 4.03 -6.72 -1.42 3.40 273.03 1.00
w_km[3,2] 0.37 0.35 4.35 -5.61 0.64 5.58 158.19 1.01
w_km[3,3] 5.48 0.23 3.97 0.27 5.53 10.33 287.84 1.01
w_km[3,4] 5.82 0.22 3.86 0.68 6.10 10.92 305.44 1.00
w_km[4,1] 1.09 0.22 3.42 -3.07 1.01 5.76 245.17 1.00
w_km[4,2] -2.20 0.25 3.96 -7.17 -2.36 3.22 241.85 1.00
w_km[4,3] -4.37 0.18 3.69 -9.14 -4.44 0.36 400.00 1.00
w_km[4,4] -8.38 0.22 3.65 -12.98 -8.77 -3.37 287.61 1.00
lambda_kl[1,1] 0.02 0.00 0.02 0.00 0.02 0.05 400.00 1.00
lambda_kl[1,2] 0.15 0.12 0.32 0.00 0.02 0.93 7.13 1.17
lambda_kl[1,3] 0.83 0.12 0.32 0.03 0.95 0.99 7.12 1.17
lambda_kl[2,1] 0.22 0.10 0.24 0.00 0.08 0.56 5.76 1.29
lambda_kl[2,2] 0.37 0.08 0.21 0.03 0.44 0.60 6.77 1.21
lambda_kl[2,3] 0.41 0.03 0.12 0.23 0.42 0.54 22.34 1.04
lambda_kl[3,1] 0.56 0.28 0.46 0.01 0.93 0.98 2.70 2.78
lambda_kl[3,2] 0.39 0.26 0.46 0.00 0.04 0.98 3.09 2.29
lambda_kl[3,3] 0.05 0.03 0.16 0.00 0.02 0.05 31.06 1.03
lambda_kl[4,1] 0.29 0.01 0.09 0.19 0.29 0.40 169.03 1.01
lambda_kl[4,2] 0.60 0.06 0.21 0.20 0.67 0.77 11.05 1.17
lambda_kl[4,3] 0.10 0.07 0.22 0.00 0.01 0.58 9.89 1.21
hypermu_k[1] -4.02 0.32 2.24 -6.92 -3.87 -1.37 50.45 1.03
hypermu_k[2] -0.72 0.08 1.66 -2.78 -0.84 1.47 400.00 1.01
hypermu_k[3] 1.22 0.09 1.75 -0.92 1.16 3.48 400.00 1.02
hypermu_k[4] 3.05 0.10 1.98 0.67 2.92 5.57 400.00 1.00
mu_kl[1,1] -7.45 0.53 3.90 -12.82 -7.13 -2.64 54.00 1.01
mu_kl[1,2] -3.19 0.55 2.92 -7.46 -2.53 -0.16 27.84 1.03
mu_kl[1,3] 0.30 0.42 1.48 -0.18 -0.13 1.07 12.55 1.09
mu_kl[2,1] -0.70 0.79 2.25 -4.48 0.64 0.76 8.12 1.26
mu_kl[2,2] 0.94 0.10 0.27 0.71 0.80 1.39 7.08 1.21
mu_kl[2,3] 1.62 0.01 0.10 1.51 1.62 1.74 43.27 1.03
mu_kl[3,1] 0.07 0.06 0.13 -0.12 0.14 0.17 4.21 1.54
mu_kl[3,2] 1.58 0.76 2.02 0.14 0.56 4.70 7.08 1.36
mu_kl[3,3] 5.13 0.47 3.25 1.16 4.76 9.86 47.73 1.07
mu_kl[4,1] -1.55 0.01 0.14 -1.72 -1.56 -1.35 152.46 1.00
mu_kl[4,2] -0.74 0.04 0.16 -0.90 -0.69 -0.63 14.24 1.13
mu_kl[4,3] 3.58 0.70 4.35 -0.64 2.40 9.80 38.90 1.04
s_kl[1,1] 2.34 0.09 1.70 0.33 2.02 4.76 400.00 1.00
s_kl[1,2] 2.07 0.28 1.92 0.10 1.63 4.98 46.86 1.02
s_kl[1,3] 0.31 0.20 0.93 0.09 0.11 0.40 21.85 1.05
s_kl[2,1] 1.14 0.37 1.55 0.15 0.28 3.43 17.81 1.11
s_kl[2,2] 0.30 0.04 0.20 0.16 0.25 0.51 25.76 1.05
s_kl[2,3] 0.28 0.01 0.06 0.21 0.27 0.36 44.37 1.01
s_kl[3,1] 1.31 0.91 1.88 0.09 0.11 4.31 4.25 1.55
s_kl[3,2] 1.36 0.63 1.62 0.09 0.61 3.64 6.70 1.39
s_kl[3,3] 2.36 0.10 1.95 0.26 1.85 5.26 400.00 1.01
s_kl[4,1] 0.33 0.01 0.09 0.22 0.32 0.44 187.86 1.00
s_kl[4,2] 0.29 0.02 0.28 0.20 0.25 0.34 197.85 1.02
s_kl[4,3] 1.73 0.38 1.84 0.16 1.03 4.29 22.94 1.06

In general terms, convergence is just about satisfactory. As shown in the following sections, the fitted values are roughly consistent with the observations thus hinting that the sampler adapts to the data even in the cases where diagnostics yield mixed results. Nonetheless, we interpret the results carefully and with skepticism. The fact that mixture parameters are the most ill-behaved leads us to the hypothesis that the observation model would improve significantly with better domain knowledge. Furthermore, we suspect that setting the same number of hidden states for all the price series analysed in the original paper may prove excessively arbitrary and hinder sampling performance.

We extract sampled hidden quantities for later use.

alpha_tk <- extract(stan.fit, pars = 'alpha_tk')[[1]]
gamma_tk <- extract(stan.fit, pars = 'gamma_tk')[[1]]
zstar_t  <- extract(stan.fit, pars = 'zstar_t')[[1]]
hatx_t   <- extract(stan.fit, pars = 'hatx_t')[[1]]
oblik_t  <- extract(stan.fit, pars = 'oblik_t')[[1]]
logA_ij  <- extract(stan.fit, pars = 'logA_ij')[[1]]

3.6.4 State probability

The following plot shows the probability, along with its 80% credible interval, that an observation belongs to one of the possible hidden states.

plot_stateprobability(alpha_tk, gamma_tk, 0.8)

Since states are determined by the previous step OHLC prices, filtered and smoothed probabilities are equivalent. We note that state membership is naturally persistent. Given the choice of input variables, state membership depends non-linearly on price levels and will remain stable so long as price changes are stationary. In some contexts, persistence may be regarded as a very valuable feature because latent states are expected to change disruptively yet sporadically. Conversely, Markov-switching dynamics may accommodate latent states that shift more often than domain knowledge suggests, which in turn may lead to overfitting. The jointly most probable hidden path, as decoded by the Viterbi algorithm, reflects the continuity of states across time.

plot_statepath(zstar_t)

It is interesting to study in detail how the price level on a given day informs the closing price for the following time step. As expected, the softmax transformation establishes a non-linear relationship between OHLC prices and the membership probability. The latent variables have a natural and intuitive interpretation. States 2 and 3 represent the upper price levels and contain the observations that are considerably and slightly above the mean; the analogous interpretation holds for the states representing the lower price levels. Following this intuition, the number of possible states \(K\) should be chosen according to the observed number of levels in the input series. We notice that the different inputs have a similar impact on the state probability due to their high cross-correlation. Even though the thresholds are estimated by maximizing the log-posterior density, out-of-sample error measures would be used if forecasting were the main goal. We expand on this in the discussion.

plot_inputprob(dataset$u.unscaled,
               apply(logA_ij, c(2, 3), function(x) { median(exp(x)) }),
               u.label = c("Open", "High", "Low", "Close"))

3.6.5 Fitted output

The following plot shows the fitted output. We first draw the hidden state according to the probabilities estimated by the softmax regression applied to the previous day's OHLC vector; we then draw one mixture component from that state and, finally, an observation from a Gaussian distribution with the corresponding mean and standard deviation.

plot_outputfit(dataset$x, hatx_t, z = apply(zstar_t, 2, median), K = K)
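
The per-step draw described above can be sketched as follows; A.row stands for the row of transition probabilities implied by the previous day's OHLC vector, and lambda, mu, s for the mixture parameters of one posterior sample (all names are illustrative, not taken from the Stan program).

draw_fitted <- function(A.row, lambda, mu, s) {
  z <- sample(1:length(A.row), 1, prob = A.row)       # hidden state from the softmax probabilities
  l <- sample(1:ncol(lambda), 1, prob = lambda[z, ])  # mixture component within that state
  rnorm(1, mu[z, l], s[z, l])                         # Gaussian emission
}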

It is important to note that the fitted observation is not the forecast itself. The methodology proposed by the original authors relies on the likelihood of the observations instead.

3.6.6 In-sample summary

All in all, the in-sample fit can be summarized in the following plot.

plot_inputoutputprob(x = dataset$x, u = dataset$u,
                     stateprob = alpha_tk, zstar = zstar_t,
                     x.label = symbol$symbol,
                     u.label = c("Open", "High", "Low", "Close"),
                     stateprob.label = bquote("Filtered prob" ~ p(z[t] ~ "|" ~ x[" " ~ 1:t])))

3.6.7 Forecast

The methodology produces a forecast based on the estimated likelihood of the last observation. We write a walk forward procedure where the model is refitted each time a new observation is received. Since Stan does not have a natural way to update the log-density from a previous run, we implement a reduced version of our software that includes only the minimum computations needed for forecasting.

The following snippet runs the walk forward procedure in parallel and returns a list of objects.

luv.wf <- wf_forecast(syms = symbol, K, L, hyperparams,
                        n.iter, n.warmup, n.chains, n.cores, n.thin, n.seed,
                        cache.dir = "fore_cache/")[[1]]

luv.300904 <- luv.wf[[49]]

3.6.7.1 Forecast for 30-sep-2004

We first recreate the forecast for 30-sep-2004. Our OHLC values differ from those shown in the original work, possibly due to adjustments. We regret that the authors do not provide enough details about data sources and pre-processing.

OHLC values for the selected dates.
Open High Low Close
2004-09-29 14.50 14.69 14.16 13.66
2004-09-30 14.36 14.46 14.21 13.62

We arbitrarily choose one of the samples generated by MCMC. We identify similar past observations and produce the point forecast accordingly.

oblik_t   <- luv.300904$oblik_t
n.samples <- dim(oblik_t)[1]
T.length  <- dim(oblik_t)[2]
h         <- 1
n         <- 1 # Sample

find_closest <- function(target, candidates, threshold) {
  ind <- which(abs(target - candidates) < abs(target) * threshold)

  if (!length(ind))
    ind <- which(abs(target - candidates) == min(abs(target - candidates)))

  ind
}

oblik_target <- oblik_t[n, T.length]
oblik_cand   <- oblik_t[n, 1:(T.length - h)]
threshold    <- 0.05

closests <- find_closest(oblik_target, oblik_cand, threshold)
d        <- abs(oblik_target - oblik_t[n, closests])
w        <- exp(d)
OHLC values, next-step closing price change and normalized weight for the neighbouring observations.
Open High Low Close Delta Weight
2003-08-18 17.90 17.98 17.83 17.00 0.05 0.13
2003-10-15 19.71 19.98 19.51 18.60 0.02 0.13
2004-03-09 15.11 15.11 14.69 14.10 -0.25 0.12
2004-03-24 14.07 14.25 14.03 13.38 0.46 0.13
2004-06-25 16.67 17.91 16.62 17.00 -0.36 0.12
2004-07-14 16.12 16.28 15.86 15.06 -0.31 0.12
2004-08-17 15.28 15.57 15.22 14.47 0.28 0.12
2004-09-27 14.40 14.46 13.95 13.34 0.50 0.13
luv.300904.y     <- luv.300904$dataset$u.unscaled[, 4]
luv.300904.delta <- luv.300904$dataset$u.unscaled[closests + h, 4] - luv.300904$dataset$u.unscaled[closests, 4]
luv.300904.yhat  <- luv.300904$dataset$u.unscaled[T.length, 4] + sum(luv.300904.delta * w) / sum(w)

The difference between the observed closing price \(x_{t+1} = 13.62\) and the forecast \(\hat{x}_{t+1} = 13.67\) equals -0.05.

We can account for parameter uncertainty by repeating these computations for all the samples corresponding to a given time step. We stress that such a measure would not constitute a true interval estimate, as the methodology only provides a point forecast.
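
A sketch of such a computation, reusing only the objects defined above: the single-sample forecast is repeated for every posterior draw and the spread of the resulting point forecasts is summarized.

luv.300904.fore <- sapply(1:n.samples, function(n) {
  oblik_target <- oblik_t[n, T.length]
  oblik_cand   <- oblik_t[n, 1:(T.length - h)]
  closests     <- find_closest(oblik_target, oblik_cand, threshold)
  w            <- exp(abs(oblik_target - oblik_t[n, closests]))
  delta        <- luv.300904$dataset$u.unscaled[closests + h, 4] -
                  luv.300904$dataset$u.unscaled[closests, 4]
  luv.300904$dataset$u.unscaled[T.length, 4] + sum(delta * w) / sum(w)
})
quantile(luv.300904.fore, c(0.10, 0.50, 0.90))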

3.6.7.2 Walking forward forecasts

We now show the walking forward forecasts for the selected stock.

T.ooslength  <- length(luv.wf)
luv.dataset  <- luv.wf[[T.ooslength]]$dataset
luv.wfy      <- tail(luv.dataset$x.unscaled, T.ooslength - 1)
luv.wfyhat   <- head(sapply(luv.wf, function(wf) { median(wf$forecast) }), -1)
plot_seqforecast(y = luv.wfy, yhat = luv.wfyhat,
                 main.label = symbol$symbol)

We report the out of sample forecast error measures in the table below. Both the coefficient of determination \(R^2\) and the Mean Absolute Percentage Error (MAPE) are consistent with the findings in the original paper.

Out of sample error measures for LUV.
MSE MAPE \(R^2\)
0.0792 1.57% 0.8689
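
For reference, a sketch of how such measures can be computed from the walk-forward series above, assuming MAPE is the mean absolute percentage error and \(R^2\) the squared correlation between observed and forecast prices (the original paper may define them slightly differently).

luv.mse  <- mean((luv.wfy - luv.wfyhat)^2)                      # mean squared error
luv.mape <- mean(abs((luv.wfy - luv.wfyhat) / luv.wfy)) * 100   # in percentage points
luv.r2   <- cor(luv.wfy, luv.wfyhat)^2
c(MSE = luv.mse, MAPE = luv.mape, R2 = luv.r2)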

3.7 Ryanair Holdings Plc (RYA.L)

3.7.1 In-sample analysis

We estimate the model for RYA.L and we then present the in-sample summary.

symbol <- syms[2, ] # RYA
prices   <- getSymbols(symbol$symbol,
                       env  = NULL,
                       from = symbol$train.from,
                       to   = symbol$test.to,
                       src  = symbol$src)
T.length <- nrow(prices[paste(symbol$train.from, symbol$train.to, sep = "/")])
dataset <- make_dataset(prices[1:T.length, ], TRUE)
rstan_options(auto_write = TRUE)

stan.model = '../iohmm-mix/stan/iohmm-hmix.stan'
stan.data = list(
  K = K,
  L = L,
  M = ncol(dataset$u),
  T = length(dataset$x),
  u_tm = as.array(dataset$u),
  x_t = as.vector(dataset$x),
  hyperparams = as.array(hyperparams)
)

stan.fit <- stan(file = stan.model,
                 data = stan.data, verbose = T,
                 iter = n.iter, warmup = n.warmup,
                 thin = n.thin, chains = n.chains,
                 cores = n.cores, seed = n.seed,
                 init = function() {init_fun(stan.data)})
## 
## TRANSLATING MODEL 'iohmm-hmix' FROM Stan CODE TO C++ CODE NOW.
## successful in parsing the Stan model 'iohmm-hmix'.
## 
## CHECKING DATA AND PREPROCESSING FOR MODEL 'iohmm-hmix' NOW.
## 
## COMPILING MODEL 'iohmm-hmix' NOW.
## 
## STARTING SAMPLER FOR MODEL 'iohmm-hmix' NOW.
## 
## SAMPLING FOR MODEL 'iohmm-hmix' NOW (CHAIN 1).
## 
## Chain 1, Iteration:   1 / 800 [  0%]  (Warmup)
## Chain 1, Iteration:  80 / 800 [ 10%]  (Warmup)
## Chain 1, Iteration: 160 / 800 [ 20%]  (Warmup)
## Chain 1, Iteration: 240 / 800 [ 30%]  (Warmup)
## Chain 1, Iteration: 320 / 800 [ 40%]  (Warmup)
## Chain 1, Iteration: 400 / 800 [ 50%]  (Warmup)
## Chain 1, Iteration: 401 / 800 [ 50%]  (Sampling)
## Chain 1, Iteration: 480 / 800 [ 60%]  (Sampling)
## Chain 1, Iteration: 560 / 800 [ 70%]  (Sampling)
## Chain 1, Iteration: 640 / 800 [ 80%]  (Sampling)
## Chain 1, Iteration: 720 / 800 [ 90%]  (Sampling)
## Chain 1, Iteration: 800 / 800 [100%]  (Sampling)
##  Elapsed Time: 473.186 seconds (Warm-up)
##                110.223 seconds (Sampling)
##                583.409 seconds (Total)
n.samples = (n.iter - n.warmup) * n.chains

alpha_tk <- extract(stan.fit, pars = 'alpha_tk')[[1]]
gamma_tk <- extract(stan.fit, pars = 'gamma_tk')[[1]]
zstar_t  <- extract(stan.fit, pars = 'zstar_t')[[1]]
hatx_t   <- extract(stan.fit, pars = 'hatx_t')[[1]]
oblik_t  <- extract(stan.fit, pars = 'oblik_t')[[1]]
logA_ij  <- extract(stan.fit, pars = 'logA_ij')[[1]]

We identify two extreme observations but we decide not to take any further action. State 2 represents a period of time in which the stock suddenly reached the maximum value in its history. As observed before, the hidden states are persistent.

knitr::kable(summary(stan.fit,
        pars = c('p_1k', 'w_km', 'lambda_kl', 'hypermu_k', 'mu_kl', 's_kl'),
        probs = c(0.10, 0.50, 0.90))$summary,
        col.names = c("Mean", "MCSE", "SE", "10\\%", "Med", "90\\%", "ESS", "$\\hat{R}$"),
      digits = 2, caption = "Summary of the posterior draws from the distribution of the estimated parameters.")
Summary of the posterior draws from the distribution of the estimated parameters.
Mean MCSE SE 10% Med 90% ESS \(\hat{R}\)
p_1k[1] 0.32 0.06 0.17 0.10 0.27 0.62 8.72 1.01
p_1k[2] 0.19 0.02 0.10 0.09 0.16 0.34 32.76 1.01
p_1k[3] 0.11 0.02 0.12 0.00 0.09 0.28 22.23 1.08
p_1k[4] 0.38 0.08 0.20 0.09 0.40 0.65 6.85 1.03
w_km[1,1] 1.74 0.83 2.38 -0.74 1.47 5.05 8.22 1.23
w_km[1,2] -15.79 0.64 2.37 -18.83 -16.12 -12.17 13.77 1.02
w_km[1,3] -0.51 0.94 2.26 -3.97 -0.10 2.47 5.72 1.00
w_km[1,4] 10.64 0.65 1.97 8.11 10.33 13.58 9.07 1.03
w_km[2,1] 1.78 0.91 2.61 -1.62 1.53 5.73 8.26 1.01
w_km[2,2] 13.15 1.12 2.70 9.27 13.42 16.98 5.83 1.34
w_km[2,3] 2.49 0.53 1.58 0.71 2.05 4.91 8.89 1.11
w_km[2,4] 8.75 1.02 2.96 3.69 9.23 12.11 8.38 1.00
w_km[3,1] -2.21 0.51 1.61 -4.75 -2.11 -0.25 10.09 1.05
w_km[3,2] -15.24 0.49 2.33 -17.73 -15.75 -11.90 23.01 1.04
w_km[3,3] 1.40 0.58 1.55 -0.69 1.53 3.29 7.08 1.02
w_km[3,4] -7.91 0.59 1.88 -9.84 -8.25 -5.44 10.27 1.01
w_km[4,1] -3.45 0.30 1.46 -5.25 -3.35 -1.75 22.87 1.00
w_km[4,2] 17.84 0.60 2.14 14.86 17.90 20.68 12.76 1.02
w_km[4,3] -0.98 0.49 1.47 -2.57 -1.19 0.91 8.92 1.11
w_km[4,4] -12.41 0.56 1.95 -14.29 -12.32 -10.26 12.04 1.12
lambda_kl[1,1] 0.01 0.00 0.01 0.00 0.01 0.03 25.52 1.00
lambda_kl[1,2] 0.83 0.06 0.14 0.66 0.86 0.97 5.56 1.47
lambda_kl[1,3] 0.16 0.06 0.14 0.01 0.13 0.32 5.41 1.49
lambda_kl[2,1] 0.08 0.01 0.04 0.04 0.07 0.13 14.77 1.11
lambda_kl[2,2] 0.64 0.06 0.15 0.43 0.66 0.83 5.77 1.01
lambda_kl[2,3] 0.28 0.07 0.15 0.10 0.25 0.48 5.26 1.04
lambda_kl[3,1] 0.62 0.03 0.11 0.46 0.61 0.76 9.59 1.08
lambda_kl[3,2] 0.37 0.04 0.11 0.23 0.38 0.53 8.80 1.09
lambda_kl[3,3] 0.01 0.00 0.01 0.00 0.00 0.02 13.90 1.01
lambda_kl[4,1] 0.09 0.00 0.02 0.07 0.09 0.12 32.48 1.03
lambda_kl[4,2] 0.90 0.00 0.02 0.87 0.90 0.92 24.96 1.00
lambda_kl[4,3] 0.01 0.00 0.01 0.00 0.00 0.02 5.46 1.28
hypermu_k[1] -2.55 0.50 1.63 -4.32 -3.17 -0.33 10.74 1.02
hypermu_k[2] 0.04 0.26 1.46 -1.71 -0.03 2.21 31.04 1.02
hypermu_k[3] 2.14 0.19 1.34 0.58 2.10 3.90 48.32 1.01
hypermu_k[4] 3.48 0.31 1.70 1.45 3.35 5.62 29.33 1.00
mu_kl[1,1] -1.14 0.24 0.88 -2.25 -1.06 -0.04 13.72 1.02
mu_kl[1,2] 0.18 0.01 0.03 0.14 0.18 0.22 16.76 1.15
mu_kl[1,3] 0.52 0.07 0.21 0.32 0.47 0.79 9.40 1.02
mu_kl[2,1] 0.91 0.12 0.30 0.61 0.86 1.49 6.22 1.17
mu_kl[2,2] 2.01 0.01 0.06 1.93 2.01 2.10 32.08 1.01
mu_kl[2,3] 2.53 0.12 0.27 2.18 2.58 2.85 5.15 1.01
mu_kl[3,1] -0.28 0.01 0.03 -0.32 -0.28 -0.24 6.04 1.01
mu_kl[3,2] -0.14 0.00 0.02 -0.17 -0.14 -0.11 49.02 1.09
mu_kl[3,3] 3.73 0.86 3.44 0.41 2.38 8.77 15.88 1.12
mu_kl[4,1] -1.24 0.00 0.01 -1.26 -1.24 -1.23 28.57 1.01
mu_kl[4,2] -0.67 0.01 0.02 -0.70 -0.67 -0.65 12.57 1.07
mu_kl[4,3] 8.00 0.70 3.38 3.32 7.54 12.54 23.54 1.12
s_kl[1,1] 2.33 0.51 1.51 0.54 1.82 4.49 8.88 1.04
s_kl[1,2] 0.14 0.01 0.03 0.11 0.14 0.18 10.80 1.21
s_kl[1,3] 0.12 0.03 0.06 0.06 0.10 0.19 3.66 1.77
s_kl[2,1] 0.90 0.61 1.49 0.07 0.22 3.69 5.91 1.23
s_kl[2,2] 0.26 0.02 0.05 0.19 0.26 0.34 11.63 1.00
s_kl[2,3] 0.33 0.04 0.09 0.20 0.36 0.43 7.27 1.06
s_kl[3,1] 0.10 0.00 0.01 0.09 0.10 0.12 9.40 1.01
s_kl[3,2] 0.06 0.00 0.02 0.04 0.06 0.09 68.74 1.00
s_kl[3,3] 2.39 0.29 1.62 0.58 2.06 4.56 30.21 1.04
s_kl[4,1] 0.07 0.00 0.02 0.06 0.07 0.09 43.74 1.01
s_kl[4,2] 0.18 0.00 0.01 0.16 0.18 0.20 13.31 1.00
s_kl[4,3] 1.71 0.41 1.57 0.22 1.12 4.25 14.80 1.08
plot_inputoutputprob(x = dataset$x, u = dataset$u,
                     stateprob = alpha_tk, zstar = zstar_t,
                     x.label = symbol$symbol,
                     u.label = c("Open", "High", "Low", "Close"),
                     stateprob.label = bquote("Filtered prob" ~ p(z[t] ~ "|" ~ x[" " ~ 1:t])))

3.7.2 Out-of-sample analysis

We produce the forecast using the same methodology and arrive at a similar accuracy level.

rya.wf <- wf_forecast(syms = symbol, K, L, hyperparams,
                        n.iter, n.warmup, n.chains, n.cores, n.thin, n.seed,
                        cache.dir = "fore_cache/")[[1]]

T.ooslength  <- length(rya.wf)
rya.dataset  <- rya.wf[[T.ooslength]]$dataset
rya.wfy      <- tail(rya.dataset$x.unscaled, T.ooslength - 1)
rya.wfyhat   <- head(sapply(rya.wf, function(wf) { median(wf$forecast) }), -1)
plot_seqforecast(y = rya.wfy, yhat = rya.wfyhat,
                 main.label = symbol$symbol)

Out of sample error measures for RYA.L.
MSE MAPE \(R^2\)
1743.143 1.3% 0.9409

3.8 Discussion

3.8.1 The statistical model

In its simplest form, this model could be interpreted as \(K \times L\) means that switch according to previous price levels. We thus anticipated a highly multimodal, ill-behaved log-posterior that aggravates the frequent combinatorial non-identifiabilities inherent to mixture models. As our first naive and mostly unrestricted implementation could not sample efficiently from the log-posterior, we decided to add more information in the form of prior belief and additional model structure (a within-state hierarchical mixture).

Even after all the improvements, we only arrive at a barely satisfactory, imperfect convergence. We are confident that our implementation works as intended since we tested the model calibration by simulation; however, some computational issues arise when the sampler is fed with real data. More specifically, we find that sampling efficiency becomes extremely low for some of the parameters of the observation model.

In line with the folk theorem of statistical computing[1], which informally states that computational problems often originate in model misspecification, we find that the output model proposed by the authors is unrepresentative of the features observed in the data.

3.8.2 The financial application

The patterns captured by the hidden states are consistent with the data under analysis. We believe a better choice of the number of states could improve the fit significantly. We find it especially appealing that states are naturally persistent. On the other hand, the observation model seems simplistic to the extent that convergence is hindered. The forecasting methodology should be inspected over a longer time span before reaching any conclusion.

Letting inputs drive the hidden dynamics has great potential for financial applications. Although the authors proposed that price levels should drive the latent variables, the transition model is flexible enough to accommodate many other features a financial analyst may want to recreate. Possible candidate inputs for persistent processes are volume, volatility and trends. If persistence is undesired, trends should be removed from the input; for example, hidden dynamics may be driven by the residuals from a given model.


4 Original Computing Environment

writeLines(readLines(file.path(Sys.getenv("HOME"), ".R/Makevars")))
## 
## CXXFLAGS=-O3 -mtune=native -march=native -Wno-unused-variable -Wno-unused-function
devtools::session_info("rstan")
## Session info -------------------------------------------------------------
##  setting  value                       
##  version  R version 3.3.3 (2017-03-06)
##  system   x86_64, mingw32             
##  ui       RTerm                       
##  language (EN)                        
##  collate  Spanish_Argentina.1252      
##  tz       America/Buenos_Aires        
##  date     2017-07-02
## Packages -----------------------------------------------------------------
##  package      * version   date       source        
##  BH             1.62.0-1  2016-11-19 CRAN (R 3.3.2)
##  colorspace     1.3-2     2016-12-14 CRAN (R 3.3.3)
##  dichromat      2.0-0     2013-01-24 CRAN (R 3.3.2)
##  digest         0.6.12    2017-01-27 CRAN (R 3.3.3)
##  ggplot2      * 2.2.1     2016-12-30 CRAN (R 3.3.3)
##  graphics     * 3.3.3     2017-03-06 local         
##  grDevices    * 3.3.3     2017-03-06 local         
##  grid           3.3.3     2017-03-06 local         
##  gridExtra      2.2.1     2016-02-29 CRAN (R 3.3.3)
##  gtable         0.2.0     2016-02-26 CRAN (R 3.3.3)
##  inline         0.3.14    2015-04-13 CRAN (R 3.3.3)
##  labeling       0.3       2014-08-23 CRAN (R 3.3.2)
##  lattice        0.20-34   2016-09-06 CRAN (R 3.3.3)
##  lazyeval       0.2.0     2016-06-12 CRAN (R 3.3.3)
##  magrittr       1.5       2014-11-22 CRAN (R 3.3.3)
##  MASS           7.3-45    2016-04-21 CRAN (R 3.3.3)
##  Matrix         1.2-8     2017-01-20 CRAN (R 3.3.3)
##  methods      * 3.3.3     2017-03-06 local         
##  munsell        0.4.3     2016-02-13 CRAN (R 3.3.3)
##  plyr           1.8.4     2016-06-08 CRAN (R 3.3.3)
##  RColorBrewer   1.1-2     2014-12-07 CRAN (R 3.3.2)
##  Rcpp           0.12.10   2017-03-19 CRAN (R 3.3.3)
##  RcppEigen      0.3.2.9.1 2017-03-15 CRAN (R 3.3.3)
##  reshape2       1.4.2     2016-10-22 CRAN (R 3.3.3)
##  rstan        * 2.14.2    2017-03-19 CRAN (R 3.3.3)
##  scales         0.4.1     2016-11-09 CRAN (R 3.3.3)
##  StanHeaders  * 2.14.0-1  2017-01-09 CRAN (R 3.3.3)
##  stats        * 3.3.3     2017-03-06 local         
##  stats4         3.3.3     2017-03-06 local         
##  stringi        1.1.3     2017-03-21 CRAN (R 3.3.3)
##  stringr        1.2.0     2017-02-18 CRAN (R 3.3.3)
##  tibble         1.3.0     2017-04-01 CRAN (R 3.3.3)
##  tools          3.3.3     2017-03-06 local         
##  utils        * 3.3.3     2017-03-06 local

References

Baum, Leonard E., and J. A. Eagon. 1967. “An Inequality with Applications to Statistical Estimation for Probabilistic Functions of Markov Processes and to a Model for Ecology.” Bulletin of the American Mathematical Society 73 (3). American Mathematical Society (AMS): 360–64. doi:10.1090/s0002-9904-1967-11751-8.

Baum, Leonard E., Ted Petrie, George Soules, and Norman Weiss. 1970. “A Maximization Technique Occurring in the Statistical Analysis of Probabilistic Functions of Markov Chains.” The Annals of Mathematical Statistics 41 (1). Institute of Mathematical Statistics: 164–71. doi:10.1214/aoms/1177697196.

Bengio, Yoshua, and Paolo Frasconi. 1995. “An Input Output HMM Architecture.”

Betancourt, Michael. 2017. “Identifying Bayesian Mixture Models.” Stan Case Studies, Volume 4. http://mc-stan.org/users/documentation/case-studies/identifying_mixture_models.html.

Carpenter, Bob, Andrew Gelman, Matt Hoffman, Daniel Lee, Ben Goodrich, Michael Betancourt, Michael A Brubaker, Jiqiang Guo, Peter Li, and Allen Riddell. 2016. “Stan: A Probabilistic Programming Language.” Journal of Statistical Software 20.

Cook, Samantha R, Andrew Gelman, and Donald B Rubin. 2006. “Validation of Software for Bayesian Models Using Posterior Quantiles.” Journal of Computational and Graphical Statistics 15 (3). Taylor & Francis: 675–92.

Gelman, Andrew, and Donald B Rubin. 1992. “Inference from Iterative Simulation Using Multiple Sequences.” Statistical Science. JSTOR, 457–72.

Gelman, Andrew, Kenneth Shirley, and others. 2011. “Inference from Simulations and Monitoring Convergence.” Handbook of Markov Chain Monte Carlo, 163–74.

Hassan, Md Rafiul, and Baikunth Nath. 2005. “Stock Market Forecasting Using Hidden Markov Model: A New Approach.” In Intelligent Systems Design and Applications, 2005. Isda’05. Proceedings. 5th International Conference on, 192–96. IEEE.

Stan Development Team. 2017a. “RStan: The R Interface to Stan.” http://mc-stan.org/.

———. 2017b. “Shinystan: Interactive Visual and Numerical Diagnostics and Posterior Analysis for Bayesian Models.” http://mc-stan.org/.

———. 2017c. Stan Modeling Language: User’s Guide and Reference Manual, Version 2.15.0.


  1. See The folk theorem of statistical computing