RDP 2023-07: Identification and Inference under Narrative Restrictions

5. Bayesian Inference under NR

This section presents approaches to conducting Bayesian inference in SVARs under NR. Section 5.1 discusses how to modify the standard Bayesian approach in AR18 to use the unconditional likelihood rather than the conditional likelihood. Section 5.2 explains how to conduct robust Bayesian inference under NR, which further addresses the issue of posterior sensitivity due to a flat likelihood.

5.1 Standard Bayesian inference

AR18 propose an algorithm for drawing from the uniform-normal-inverse-Wishart posterior of $\left(\varphi ,Q\right)$ given traditional sign restrictions and NR. This is the posterior induced by a normal-inverse-Wishart prior for $\varphi$ and a uniform prior for Q. The algorithm draws $\varphi$ from a normal-inverse-Wishart distribution and Q from a uniform distribution over $𝒪\left(n\right)$, and checks whether the restrictions are satisfied; if they are not, the joint draw is discarded and a new draw is made. If the restrictions are satisfied, the ex ante probability that the NR are satisfied at the drawn parameter values is approximated via Monte Carlo simulation. Once a sufficient number of draws satisfying the restrictions has been obtained, the draws are resampled with replacement using as importance weights the inverse of the probability that the NR are satisfied.[15]
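As a rough illustration, the accept/reject and importance-resampling steps can be sketched for a toy bivariate model in which $\varphi$ is just the residual covariance matrix, the traditional restriction is a positive impact response, and the NR restrict the first shock in a single period. The model, the restriction checks and the helper names (`haar_Q`, `nr_holds`, `ar18_sampler`) are our own simplifications for exposition, not AR18's implementation.

```python
import numpy as np
from scipy.stats import invwishart

def haar_Q(n, rng):
    # Uniform draw from O(n): QR decomposition of a Gaussian matrix,
    # with the sign normalisation that makes diag(R) positive.
    z = rng.standard_normal((n, n))
    q, r = np.linalg.qr(z)
    return q * np.sign(np.diag(r))

def nr_holds(AQ, e):
    # Toy NR: in the restricted period the first shock is positive and
    # is the dominant contributor to the first variable.
    return e[0] > 0 and abs(AQ[0, 0] * e[0]) >= abs(AQ[0, 1] * e[1])

def ar18_sampler(u, n_keep=100, n_mc=200, seed=0):
    """Accept/reject sampling with importance resampling, in the spirit
    of AR18, for a toy model where phi = Sigma (residual covariance)."""
    rng = np.random.default_rng(seed)
    T, n = u.shape
    kept, weights = [], []
    while len(kept) < n_keep:
        Sigma = invwishart.rvs(df=T, scale=u.T @ u, random_state=rng)
        A = np.linalg.cholesky(Sigma)        # one point in the orbit of phi
        Q = haar_Q(n, rng)
        AQ = A @ Q
        eps = np.linalg.solve(A, u.T).T @ Q  # implied structural shocks
        # Traditional sign restriction and NR checked on the observed data.
        if AQ[0, 0] > 0 and nr_holds(AQ, eps[0]):
            # Ex ante probability that the NR hold at (Sigma, Q),
            # approximated by Monte Carlo over shock realisations.
            sims = rng.standard_normal((n_mc, n))
            p = max(np.mean([nr_holds(AQ, e) for e in sims]), 1.0 / n_mc)
            kept.append((Sigma, Q))
            weights.append(1.0 / p)
    # Resampling with weights 1/p transforms draws from the
    # unconditional-likelihood posterior into draws from the
    # conditional-likelihood posterior; omitting this step leaves
    # draws from the unconditional-likelihood posterior.
    w = np.asarray(weights)
    idx = rng.choice(n_keep, size=n_keep, replace=True, p=w / w.sum())
    return [kept[i] for i in idx]
```

Dropping the final resampling block is exactly the modification discussed below for targeting the unconditional-likelihood posterior.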

This algorithm can be interpreted as drawing from the posterior based on the unconditional likelihood and then using importance sampling to transform into draws from the posterior based on the conditional likelihood. Drawing from the posterior based on the unconditional likelihood therefore simply requires omitting the importance-sampling step. Constructing the importance weights requires Monte Carlo integration, which can be computationally expensive, particularly when the NR constrain the structural shocks in multiple periods. Omitting the importance-sampling step can therefore ease computational burden.

The algorithm described above places more weight on values of $\varphi$ (relative to the notional normal-inverse-Wishart prior) that are more likely to satisfy the restrictions under the uniform distribution over $𝒪\left(n\right)$ (i.e. values with ‘larger’ conditional identified sets). As discussed in Uhlig (2017), it may instead be preferable to use a prior that is conditionally uniform over the identified set for Q. To draw from the posterior of $\left(\varphi ,Q\right)$ under the unconditional likelihood given a conditionally uniform prior for Q simply requires obtaining a fixed number of draws of Q at each draw of $\varphi$.
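A conditionally uniform prior can be implemented by brute force: at each retained draw of $\varphi$, keep rejecting Haar draws of Q until a fixed number of draws satisfying the restrictions is accepted, so that every $\varphi$ draw receives the same number of Q draws regardless of the size of its conditional identified set. A minimal sketch, with our own (hypothetical) interface:

```python
import numpy as np

def haar_Q(n, rng):
    # Uniform draw from O(n): QR decomposition of a Gaussian matrix,
    # with the sign normalisation that makes diag(R) positive.
    z = rng.standard_normal((n, n))
    q, r = np.linalg.qr(z)
    return q * np.sign(np.diag(r))

def q_draws_at_phi(check_restrictions, n, m, rng, max_tries=100_000):
    """Return exactly m Haar draws of Q satisfying the restrictions at a
    given phi (encoded in check_restrictions), implementing a prior for
    Q that is conditionally uniform over the identified set."""
    out = []
    for _ in range(max_tries):
        Q = haar_Q(n, rng)
        if check_restrictions(Q):
            out.append(Q)
            if len(out) == m:
                return out
    # A very low acceptance rate suggests a (nearly) empty identified set.
    raise RuntimeError("failed to find m draws; identified set may be empty")
```

Because each $\varphi$ draw contributes the same number of Q draws, values of $\varphi$ with larger conditional identified sets no longer receive extra weight.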

5.2 Robust Bayesian inference

Standard Bayesian inference based on the unconditional likelihood (or based on the conditional likelihood under shock-sign restrictions) is potentially sensitive to the choice of conditional prior for Q given $\varphi$, because the likelihood possesses flat regions. This section explains how to conduct robust Bayesian inference about a scalar-valued function of the structural parameters under NR and traditional sign restrictions. The approach can be viewed as performing global sensitivity analysis to assess whether posterior conclusions are robust to the choice of prior on the flat regions of the likelihood. We assume that the object of interest is an impulse response $\eta$, but the discussion applies to any other scalar-valued function of the structural parameters.

Let ${\pi }_{\varphi }$ be a prior over the reduced-form parameters $\varphi \in \Phi$, where $\Phi$ is the space of reduced-form parameters such that $𝒬\left(\varphi |S\right)$ is non-empty. A joint prior for $\left(\varphi ,Q\right)\in \Phi ×𝒪\left(n\right)$ can be written as ${\pi }_{\varphi ,Q}={\pi }_{Q|\varphi }{\pi }_{\varphi }$, where ${\pi }_{Q|\varphi }$ is supported only on $𝒬\left(\varphi |S\right)$. When there are only traditional identifying restrictions, ${\pi }_{Q|\varphi }$ is not updated by the data, because the likelihood is not a function of Q. Posterior inference may therefore be sensitive to the choice of conditional prior, even asymptotically. As discussed above, a similar issue arises under NR. The difference under NR is that ${\pi }_{Q|\varphi }$ is updated by the data through the truncation points of the unconditional likelihood. However, at each value of $\varphi$, the unconditional likelihood is flat over the set of values of Q satisfying the NR. Consequently, the conditional posterior for $Q|\varphi ,{Y}^{T}$ is proportional to the conditional prior for $Q|\varphi$ at each $\varphi$ whenever the conditional identified set for Q given $\left(\varphi ,{Y}^{T}\right)$ is non-empty.

Rather than specifying a single conditional prior for Q, the robust Bayesian approach of GK21 considers the set of all conditional priors for Q that are consistent with the identifying restrictions:

(35) $\Pi_{Q|\varphi} = \left\{ \pi_{Q|\varphi} : \pi_{Q|\varphi}\left( 𝒬\left(\varphi |S\right) \right) = 1 \right\}$

Notice that we cannot impose the NR using a particular conditional prior due to the data-dependent mapping from $\varphi$ to Q induced by the NR. However, by considering all possible conditional priors that are consistent with the traditional identifying restrictions, we trace out all possible conditional posteriors for $Q|\varphi ,{Y}^{T}$ that are consistent with the traditional identifying restrictions and the NR. This is because the NR truncate the unconditional likelihood and the traditional identifying restrictions truncate the prior for $Q|\varphi$, so the posterior for $Q|\varphi ,{Y}^{T}$ is supported only on values of Q that satisfy both sets of restrictions.

Given a particular prior for $\left(\varphi ,Q\right)$ and using the unconditional likelihood, the posterior is

(36) $\pi_{\varphi ,Q|Y^{T},D_{N}=1} \propto p\left(Y^{T},D_{N}=1|\varphi ,Q\right)\pi_{Q|\varphi }\pi_{\varphi } \propto f\left(Y^{T}|\varphi \right)D_{N}\left(\varphi ,Q,Y^{T}\right)\pi_{\varphi }\pi_{Q|\varphi } \propto \pi_{\varphi |Y^{T}}\pi_{Q|\varphi }D_{N}\left(\varphi ,Q,Y^{T}\right)$

The final expression for the posterior makes it clear that any prior for $Q|\varphi$ that is consistent with the traditional identifying restrictions is in effect further truncated by the NR (through the likelihood) once the data are realised. Generating this posterior using every prior in the set of conditional priors yields a set of posteriors for $\left(\varphi ,Q\right)$:

(37) $\Pi_{\varphi ,Q|Y^{T},D_{N}=1} = \left\{ \pi_{\varphi ,Q|Y^{T},D_{N}=1} = \pi_{\varphi |Y^{T}}\pi_{Q|\varphi }D_{N}\left(\varphi ,Q,Y^{T}\right) : \pi_{Q|\varphi } \in \Pi_{Q|\varphi } \right\}$

Marginalising each posterior in this set induces a set of posteriors for $\eta$, $\Pi_{\eta |Y^{T},D_{N}=1}$. Associated with each of these posteriors are quantities such as the posterior mean, median and other quantiles. For example, as we consider each possible prior within $\Pi_{Q|\varphi }$, we can trace out the set of all possible posterior means for $\eta$. This set will always be an interval, so we can summarise the ‘set of posterior means’ by its end points:

(38) $\left[ \int_{\Phi }\ell \left(\varphi ,Y^{T}\right)d\pi_{\varphi |Y^{T}},\ \int_{\Phi }u\left(\varphi ,Y^{T}\right)d\pi_{\varphi |Y^{T}} \right]$

where $\ell \left(\varphi ,Y^{T}\right)=\mathrm{inf}\left\{\eta \left(\varphi ,Q\right):Q\in 𝒬\left(\varphi |Y^{T},N,S\right)\right\}$, $u\left(\varphi ,Y^{T}\right)=\mathrm{sup}\left\{\eta \left(\varphi ,Q\right):Q\in 𝒬\left(\varphi |Y^{T},N,S\right)\right\}$ and $𝒬\left(\varphi |Y^{T},N,S\right)=𝒬\left(\varphi |S\right)\cap 𝒬\left(\varphi |Y^{T},N\right)$ is the set of values of Q that are consistent with both the traditional identifying restrictions and the NR (i.e. the conditional identified set). In contrast, in GK21 the set of posterior means is obtained by finding the infimum and supremum of $\eta \left(\varphi ,Q\right)$ over $𝒬\left(\varphi |S\right)$ and averaging these over $\pi_{\varphi |Y^{T}}$. The important difference from GK21 is that the current set of posterior means depends on the data not only through the posterior for $\varphi$ but also through the conditional identified set generated by the NR. As a result, in contrast with GK21, we cannot interpret the set of posterior means (Equation (38)) as a consistent estimator of the identified set for $\eta$ (which, as discussed above, is not well defined). Nevertheless, the set of posterior means still carries a robust Bayesian interpretation similar to GK21 in that it identifies the posterior conclusions that are robust to the choice of prior on the non-updated part of the parameter space (i.e. on the flat regions of the likelihood).

As in GK21, we can also report a robust credible region with credibility level $\alpha$. This is the shortest interval estimate for $\eta$ such that the posterior probability put on the interval is greater than or equal to $\alpha$ uniformly over the posteriors in $\Pi_{\eta |Y^{T},D_{N}=1}$ (see Proposition 1 of GK21). We can also report posterior lower and upper probabilities, which are the infimum and supremum, respectively, of the posterior probability of a hypothesis over all posteriors in the set.
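Given posterior draws of the bounds $\ell \left(\varphi ,Y^{T}\right)$ and $u\left(\varphi ,Y^{T}\right)$, a robust credible region can be approximated by searching for the shortest interval that contains the whole bound interval with posterior probability at least $\alpha$. The discretised search below is our own sketch of this idea, not GK21's exact algorithm:

```python
import numpy as np

def robust_credible_region(lows, highs, alpha=0.68):
    """Shortest interval [c_l, c_u] containing the bound interval
    [l(phi), u(phi)] with posterior probability at least alpha,
    approximated from draws of the bounds (cf. Proposition 1 of GK21)."""
    lows, highs = np.asarray(lows), np.asarray(highs)
    n = len(lows)
    k = int(np.ceil(alpha * n))               # required number of covered draws
    best = None
    for c_l in np.sort(lows)[::-1]:           # candidate lower end points
        u_elig = np.sort(highs[lows >= c_l])  # draws whose lower bound is covered
        if len(u_elig) >= k:
            c_u = u_elig[k - 1]               # smallest feasible upper end point
            if best is None or c_u - c_l < best[1] - best[0]:
                best = (c_l, c_u)
    return best
```

Restricting the candidate lower end points to the drawn values of $\ell$ is without loss, since any feasible lower end point can be raised to the smallest covered draw without losing coverage.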

To numerically implement this robust Bayesian procedure, we extend the numerical algorithms in GK21 to handle NR. We approximate the bounds of the conditional identified set at each value of $\varphi$ using a simulation-based approach based on Algorithm 2 of GK21. See Appendix A for details.
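A simulation-based approximation in the spirit of GK21's Algorithm 2 can be sketched as follows: at each posterior draw of $\varphi$, sample many values of Q, keep those lying in the conditional identified set, and take the minimum and maximum of $\eta$; averaging these bounds over the $\varphi$ draws approximates the set of posterior means in Equation (38). The function names and interfaces here are illustrative, not the paper's code.

```python
import numpy as np

def posterior_mean_bounds(phi_draws, eta, q_sampler, in_cis, n_q=500, seed=0):
    """Approximate [E(l(phi)), E(u(phi))] by brute-force simulation over Q
    at each posterior draw of phi."""
    rng = np.random.default_rng(seed)
    lows, highs = [], []
    for phi in phi_draws:
        vals = []
        for _ in range(n_q):
            Q = q_sampler(rng)
            if in_cis(phi, Q):   # Q satisfies sign restrictions and NR
                vals.append(eta(phi, Q))
        if vals:                 # skip draws with a (numerically) empty set
            lows.append(min(vals))
            highs.append(max(vals))
    # Average the simulated bounds over the phi draws.
    return float(np.mean(lows)), float(np.mean(highs))
```

The accuracy of the approximated bounds depends on the number of Q draws per $\varphi$; $\varphi$ draws at which no Q is accepted are treated as having an empty conditional identified set and dropped.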

Footnote

Based on the results in Arias et al (2018), AR18 argue that their algorithm draws from a normal-generalised-normal posterior for the SVAR's structural parameters $\left(A_{0},A_{+}\right)$ induced by a conjugate normal-generalised-normal prior, conditional on the restrictions. [15]