See below for details on model specification.

mixture(K, R, observation = NULL, initial = NULL, transition = NULL,
  name = "")

Arguments

K

An integer with the number of hidden states.

R

An integer with the dimension of the observation vector (e.g. 1 is univariate, 2 is bivariate).

observation

One density, or several densities chained with the `+` operator, describing the observation model. See below, and a forthcoming vignette, for a detailed explanation.

initial

One density, or several densities chained with the `+` operator, describing the initial distribution model. See below, and a forthcoming vignette, for a detailed explanation.

transition

One density, or several densities chained with the `+` operator, describing the transition model. See below, and a forthcoming vignette, for a detailed explanation.

name

An optional character string with a name for the model.

Value

A Specification object that may be used to validate calibration (validate_calibration), be compiled (compile), generate data (sim), or be fit to data, either to obtain a point estimate (optimizing) or to run full Bayesian inference via Markov chain Monte Carlo (fit).
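
For illustration, a sketch of how a Specification object may flow through these functions. Only the function names come from the list above; argument names other than the specification itself (e.g. y for the observed data) are assumptions for illustration, so check each function's own documentation for the exact signatures.

  # Hypothetical workflow sketch: function names are those listed above,
  # but argument names such as y are illustrative assumptions.
  mySpec <- mixture(
    K = 2, R = 1,
    observation = Gaussian(mu = Gaussian(0, 10), sigma = 1),
    initial     = Dirichlet(alpha = c(1, 1)),
    transition  = Dirichlet(alpha = c(1, 1))
  )
  myModel <- compile(mySpec)        # compile the model
  mySim   <- sim(mySpec)            # generate data from the specification
  myOpt   <- optimizing(mySpec, y)  # point estimate
  myFit   <- fit(mySpec, y)         # full Bayesian inference via MCMC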

Model specification

A Hidden Markov Model may be seen as three submodels that jointly specify the dynamics of the observed random variable. To specify the observation, initial distribution, and transition models, we designed an S3 object called Density that defines the density form, parameter priors, and fixed values for parameters. These are flexible enough to allow bounds on the parameter space as well as truncation of prior densities.

Internally, a Specification object is a nested list storing either K multivariate densities (i.e. one multivariate density per state) or K x R univariate densities (i.e. one univariate density for each dimension of the observation variable in each state), depending on the user input. However, the user is not expected to know the internal details of the implementation. Instead, the input will be interpreted based on three things: the dimension of the observation vector R, the number of densities given by the user, and the type of density given by the user.

Univariate observation model (i.e. R = 1):

  1. Enter one univariate density if you want the observation variable to have the same density and parameter priors in every hidden state. Note that, although the user input is recycled for every hidden state for the purpose of model specification, parameters are not shared across states. All parameters are free.

  2. Enter K univariate densities if you want the observation variable to have different densities and/or parameter priors in each hidden state.

  # Assume K = 2, R = 1
  # 1. Same density and priors in every state
  observation = Gaussian(
    mu    = Gaussian(0, 10),
    sigma = Student(mu = 0, sigma = 10, nu = 1, bounds = list(0, NULL))
  )
  # 2. (a) Different priors for each of the K = 2 possible hidden states
  observation =
    Gaussian(
      mu    = Gaussian(0, 10),
      sigma = Student(mu = 0, sigma =  1, nu = 1, bounds = list(0, NULL))
    ) +
    Gaussian(
      mu    = Gaussian(0, 10),
      sigma = Student(mu = 0, sigma = 10, nu = 1, bounds = list(0, NULL))
    )
  # 2. (b) Different densities for each of the K = 2 possible hidden states
  #    (i.e. the observed variable has heavy tails in the second state).
  observation =
    Gaussian(
      mu    = Gaussian(0, 10),
      sigma = Student(mu = 0, sigma = 10, nu = 1, bounds = list(0, NULL))
    ) +
    Student(
      mu    = Gaussian(0, 10),
      sigma = Student(mu = 0, sigma = 10, nu = 1, bounds = list(0, NULL)),
      nu    = Student(mu = 0, sigma = 10, nu = 1, bounds = list(0, NULL))
    )

Multivariate observation model (i.e. R > 1):

  1. Enter one univariate density if you want every dimension of the observation vector to have the same density and parameter priors in every hidden state. In this case, the user specifies the marginal density of independent random variables.

  2. Enter one multivariate density if you want the observation vector to have the same joint density and parameter priors in every hidden state. In this case, the user specifies the joint density of random variables.

  3. Enter K univariate densities if you want every dimension of the observation vector to have the same density and parameter priors within each hidden state. In this case, the user specifies the marginal density of independent random variables for each state. In other words, given a latent state, each variable in the observation vector will have the same density and parameter priors.

  4. Enter K multivariate densities if you want the observation vector to have different densities and/or parameter priors in each hidden state. In this case, the user specifies a joint density and parameter priors that vary per state.

  5. Enter R univariate densities if you want each dimension of the observation vector to have different densities and parameter priors, with this specification being the same in every hidden state. In this case, the user specifies the marginal density of independent elements of a random vector, and this specification is the same in all latent states.

  6. Enter R x K univariate densities if you want each dimension of the observation vector to have different densities and parameter priors in each hidden state. In this case, the user specifies the marginal density of independent elements of a random vector which also varies for each latent state.

  # Assume K = 2, R = 2
  # 1. Same density for every dimension of the random vector and hidden state
  observation = Gaussian(
    mu    = Gaussian(0, 10),
    sigma = Student(mu = 0, sigma = 10, nu = 1, bounds = list(0, NULL))
  )
  # 2. Same density for the random vector in every hidden state
  observation = MVGaussianCor(
    mu    = MVGaussian(mu = c(0, 0), sigma = matrix(c(100, 0, 0, 100), 2, 2)),
    L     = LKJCor(eta = 2)
  )
  # 3. Different density in each hidden state, with the same specification
  #    for every dimension of the random vector (i.e. we believe the
  #    observed variable has heavy tails in the second state).
  observation =
    Gaussian(
      mu    = Gaussian(0, 10),
      sigma = Student(mu = 0, sigma = 10, nu = 1, bounds = list(0, NULL))
    ) +
    Student(
      mu    = Gaussian(0, 10),
      sigma = Student(mu = 0, sigma = 10, nu = 1, bounds = list(0, NULL)),
      nu    = Student(mu = 0, sigma = 10, nu = 1, bounds = list(0, NULL))
    )
  # 4. Different priors in each hidden state
  observation =
    MVGaussianCor(
      mu    = MVGaussian(mu = c(0, 0), sigma = matrix(c(100, 0, 0, 100), 2, 2)),
      L     = LKJCor(eta = 2)
    ) +
    MVGaussianCor(
      mu    = MVGaussian(mu = c(1, -1), sigma = matrix(c(100, 0, 0, 100), 2, 2)),
      L     = LKJCor(eta = 2)
    )
  # 5. Cannot be used in this case since K = R = 2. See the paragraph below.
  # 6. Different density for each dimension and hidden state
  observation =
    Gaussian(
      mu    = Gaussian(0, 10),
      sigma = Student(mu = 0, sigma = 10, nu = 1, bounds = list(0, NULL))
    ) +
    Student(
      mu    = Gaussian(0, 10),
      sigma = Student(mu = 0, sigma = 10, nu = 1, bounds = list(0, NULL)),
      nu    = Student(mu = 0, sigma = 10, nu = 1, bounds = list(0, NULL))
    ) +
    Gaussian(
      mu    = Gaussian(0, 1),
      sigma = Student(mu = 0, sigma = 10, nu = 1, bounds = list(0, NULL))
    ) +
    Student(
      mu    = Gaussian(0, 1),
      sigma = Student(mu = 0, sigma = 10, nu = 1, bounds = list(0, NULL)),
      nu    = Student(mu = 0, sigma = 10, nu = 1, bounds = list(0, NULL))
    )

These last specifications, admittedly intricate and somewhat unnatural, are provided in case the user has very specific modeling needs. When K = R, cases 3 and 5 clash, and the software will read the input as case 3. Note that, although the user input is recycled for the purpose of model specification, parameters are never shared across dimensions or states. All parameters are free.

Initial model:

  1. Enter one univariate density if you want every initial state probability to have the same parameter priors. In this case, the user specifies the marginal density of independent initial state probabilities. Note that, although these priors may not per se guarantee that the elements of the initial vector sum to one, this is not problematic for estimation. The log posterior density will only be affected by the relative strength of these priors, even if they are not normalized.

  2. Enter K univariate densities if you want each initial state probability to have different priors. Additional comments from the previous item apply.

  3. Enter one multivariate density if you want to define a joint prior for all the elements of the initial distribution vector.

  # Assume K = 2
  # 1. Same prior (uniform) for every initial state probability
  initial = Beta(alpha = 1, beta = 1)
  # 2. Different priors for each of the K = 2 initial state probabilities
  initial = Beta(alpha = 0.7, beta = 1) + Beta(alpha = 1, beta = 0.7)
  # 3. One multivariate prior for the initial state vector
  initial = Dirichlet(alpha = c(1, 1))

Specification #3 is the most suitable for most problems, unless the user has very specific modeling needs. Useful densities for this model include Beta and Dirichlet.

Transition model:

  1. Enter one univariate density if you want all the K x K elements of the transition matrix to have the same prior. In this case, the user specifies the marginal prior of transition probabilities. Note that, although these priors may not guarantee per se that the elements of each row sum to one, this is not problematic for estimation. The log posterior density will only be affected by the relative strength of these priors even if they are not normalized.

  2. Enter one multivariate density if you want every K-sized row of the transition matrix to have the same prior. In this case, the user specifies the joint prior of the transition probabilities out of any given starting state.

  3. Enter K univariate densities if you want each element of any given row to have different priors. In this case, the user specifies the marginal prior for the K transition probabilities for any given starting state. Additional comments from the first item apply.

  4. Enter K multivariate densities if you want each K-sized row of the transition matrix to have a different multivariate prior. In this case, the user specifies a joint prior over the transition probabilities that varies with the starting state.

  5. Enter K x K univariate densities if you want each element of the transition matrix to have a different prior. In this case, the user specifies the marginal prior for each of the K x K transition probabilities. Additional comments from the first item apply.

  # Assume K = 2
  # 1. Same prior (uniform) for each of the K x K elements of the matrix
  transition = Beta(alpha = 1, beta = 1)
  # 2. Same prior (uniform) for each of the K rows of the matrix
  transition = Dirichlet(alpha = c(1, 1))
  # 3. Different priors for each element in a row
  transition = Beta(alpha = 0.7, beta = 1) + Beta(alpha = 1, beta = 0.7)
  # 4. Different priors for each row
  transition = Dirichlet(alpha = c(0.7, 1)) + Dirichlet(alpha = c(1, 0.7))
  # 5. Different priors for each element in the matrix
  transition =
    Beta(alpha = 0.7, beta = 1) + Beta(alpha = 1, beta =   1) +
    Beta(alpha =   1, beta = 1) + Beta(alpha = 1, beta = 0.7)

Specifications #2 and #4 are the most suitable for most problems, unless the user has very specific modeling needs. Useful densities for this model include Beta and Dirichlet.

Fixed parameters

Fixed parameters may be specified following this example:

  # Gaussian density with fixed standard deviation
  observation = Gaussian(
    mu    = Gaussian(0, 10),
    sigma = 1
  )

See also

Other models: hmm, specify

Examples

mySpec <- hmm(
  K = 2, R = 1,
  observation = Gaussian(
    mu    = Gaussian(0, 10),
    sigma = Student(mu = 0, sigma = 10, nu = 1, bounds = list(0, NULL))
  ),
  initial    = Dirichlet(alpha = c(1, 1)),
  transition = Dirichlet(alpha = c(1, 1)),
  name       = "Univariate Gaussian Hidden Markov Model"
)

explain(mySpec)
#> ________________________________________________________________________________
#> UNIVARIATE GAUSSIAN HIDDEN MARKOV MODEL
#> ________________________________________________________________________________
#>
#> Univariate observations (R = 1).
#> Observation model for Variable 1 in State 1
#> Variable Density: Gaussian (-infty, infty)
#> Free parameters: 2 (mu, sigma)
#> mu : real mu11;
#> Prior Density: Gaussian (-infty, infty)
#> Fixed parameters: 2 (mu = 0, sigma = 10),
#> sigma : real<lower = 0> sigma11;
#> Prior Density: Student [0, infty)
#> Fixed parameters: 3 (mu = 0, sigma = 10, nu = 1)
#>
#> Observation model for Variable 1 in State 2
#> Variable Density: Gaussian (-infty, infty)
#> Free parameters: 2 (mu, sigma)
#> mu : real mu21;
#> Prior Density: Gaussian (-infty, infty)
#> Fixed parameters: 2 (mu = 0, sigma = 10),
#> sigma : real<lower = 0> sigma21;
#> Prior Density: Student [0, infty)
#> Fixed parameters: 3 (mu = 0, sigma = 10, nu = 1)
#>
#>
#> Initial distribution model
#> Prior Density: Dirichlet (-infty, infty)
#> Fixed parameters: 1 (alpha = [1, 1])
#>
#> Transition model
#> Prior Density: Dirichlet (-infty, infty)
#> Fixed parameters: 1 (alpha = [1, 1])
#>
#>