RDP 2023-07: Identification and Inference under Narrative Restrictions Appendix A: Numerical Implementation

This appendix describes a general algorithm to implement the robust Bayesian procedure under NR. GK21 propose numerical algorithms for conducting robust Bayesian inference in SVARs identified using traditional sign and zero restrictions. Their Algorithm 1 uses a numerical optimisation routine to obtain the lower and upper bounds of the identified set at each draw of $\varphi$. Obtaining the bounds via numerical optimisation may be difficult under the set of NR considered here, since the problem is non-convex. We therefore adapt Algorithm 2 of GK21, which approximates the bounds of the identified set at each draw of $\varphi$ using Monte Carlo simulation.

Algorithm A.1. Let $N(\varphi, Q, Y^T) \ge 0_{s \times 1}$ be the set of NR and let $S(\varphi, Q) \ge 0_{\tilde{s} \times 1}$ be the set of traditional sign restrictions (excluding the sign normalisation). Assume the object of interest is $\eta_{i,j,h} = c'_{i,h}(\varphi) q_j$.

• Step 1: Specify a prior for $\varphi$, $\pi_{\varphi}$, and obtain the posterior $\pi_{\varphi | Y^T}$.
• Step 2: Draw $\varphi$ from $\pi_{\varphi | Y^T}$ and check whether $\mathcal{Q}(\varphi | Y^T, N, S)$ is empty using the subroutine below.
• Step 2.1: Draw an n × n matrix of independent standard normal random variables, Z, and let $Z = \tilde{Q}R$ be the QR decomposition of Z.[18]
• Step 2.2: Define

$Q = \left[\operatorname{sign}\left(\left(\Sigma_{tr}^{-1} e_{1,n}\right)' \tilde{q}_1\right) \frac{\tilde{q}_1}{\|\tilde{q}_1\|}, \ldots, \operatorname{sign}\left(\left(\Sigma_{tr}^{-1} e_{n,n}\right)' \tilde{q}_n\right) \frac{\tilde{q}_n}{\|\tilde{q}_n\|}\right],$

where $\tilde{q}_j$ is the jth column of $\tilde{Q}$.

• Step 2.3: Check whether Q satisfies $N(\varphi, Q, Y^T) \ge 0_{s \times 1}$ and $S(\varphi, Q) \ge 0_{\tilde{s} \times 1}$. If so, retain Q and proceed to Step 3. Otherwise, repeat Steps 2.1 and 2.2 (up to a maximum of L times) until a Q satisfying the restrictions is obtained. If no draws of Q satisfy the restrictions, approximate $\mathcal{Q}(\varphi | Y^T, N, S)$ as being empty and return to Step 2.
• Step 3: Repeat Steps 2.1–2.3 until K draws of Q are obtained. Let $\{Q_k, k = 1, \ldots, K\}$ be the K draws of Q that satisfy the restrictions and let $q_{j,k}$ be the jth column of $Q_k$. Approximate $\left[\ell(\varphi, Y^T), u(\varphi, Y^T)\right]$ by $\left[\min_k c'_{i,h}(\varphi) q_{j,k}, \max_k c'_{i,h}(\varphi) q_{j,k}\right]$.
• Step 4: Repeat Steps 2–3 M times to obtain $\left[\ell(\varphi_m, Y^T), u(\varphi_m, Y^T)\right]$ for m = 1, ..., M. Approximate the set of posterior means using the sample averages of $\ell(\varphi_m, Y^T)$ and $u(\varphi_m, Y^T)$.
• Step 5: To obtain an approximation of the smallest robust credible region with credibility $\alpha \in (0,1)$, define $d(\eta, \varphi, Y^T) = \max\left\{|\eta - \ell(\varphi, Y^T)|, |\eta - u(\varphi, Y^T)|\right\}$ and let $\hat{z}_{\alpha}(\eta)$ be the sample $\alpha$ quantile of $\{d(\eta, \varphi_m, Y^T), m = 1, \ldots, M\}$. An approximated smallest robust credible interval for $\eta_{i,j,h}$ is the interval centred at $\arg\min_{\eta} \hat{z}_{\alpha}(\eta)$ with radius $\min_{\eta} \hat{z}_{\alpha}(\eta)$.
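As a minimal sketch of the simulation subroutine in Steps 2.1–2.3, the Python code below draws $\tilde{Q}$ via the QR decomposition of a standard normal matrix, applies the sign normalisation of Step 2.2, and rejection-samples until a draw satisfies the restrictions. The function names (`draw_Q`, `find_Q`) and the `satisfies_restrictions` callback are illustrative, not from the paper; in practice the callback would evaluate the model-specific restriction functions $N$ and $S$.

```python
import numpy as np

def draw_Q(Sigma_tr, rng):
    """Steps 2.1-2.2: draw Q_tilde uniformly over O(n) via the QR
    decomposition of an n x n standard normal matrix, then flip column
    signs so the sign normalisation holds: the diagonal elements of
    A0 = Q' Sigma_tr^{-1} are non-negative."""
    n = Sigma_tr.shape[0]
    Z = rng.standard_normal((n, n))     # Step 2.1: iid N(0,1) entries
    Q_tilde, _ = np.linalg.qr(Z)        # Z = Q_tilde @ R
    Sigma_tr_inv = np.linalg.inv(Sigma_tr)
    cols = []
    for j in range(n):
        q = Q_tilde[:, j]
        # sign((Sigma_tr^{-1} e_{j,n})' q_j); Sigma_tr^{-1} e_{j,n} is
        # the jth column of Sigma_tr^{-1}
        s = np.sign(Sigma_tr_inv[:, j] @ q)
        cols.append(s * q / np.linalg.norm(q))
    return np.column_stack(cols)

def find_Q(Sigma_tr, satisfies_restrictions, L, rng):
    """Step 2.3: rejection sampling. Returns a Q satisfying the NR and
    sign restrictions, or None if the conditional identified set is
    approximated as empty after L unsuccessful attempts."""
    for _ in range(L):
        Q = draw_Q(Sigma_tr, rng)
        if satisfies_restrictions(Q):
            return Q
    return None
```

Retaining K such draws and taking the minimum and maximum of $c'_{i,h}(\varphi) q_{j,k}$ over k then yields the approximated bounds in Step 3.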

Algorithm A.1 approximates $\left[\ell(\varphi, Y^T), u(\varphi, Y^T)\right]$ at each draw of $\varphi$ via Monte Carlo simulation. The approximated set will be too narrow given a finite number of draws of Q, but the approximation error vanishes as the number of draws goes to infinity. Montiel Olea and Nesbit (2021) derive bounds on the number of draws required to control the approximation error.
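Given simulated bounds $\{\ell(\varphi_m, Y^T), u(\varphi_m, Y^T)\}_{m=1}^{M}$, Steps 4 and 5 reduce to simple array operations. The sketch below uses hypothetical helper names, and the grid search over $\eta$ is one possible way to implement the minimisation in Step 5 (the grid resolution governs the accuracy of the approximation).

```python
import numpy as np

def posterior_mean_bounds(ell, u):
    """Step 4: approximate the set of posterior means by the sample
    averages of the lower and upper bounds over the M draws of phi."""
    return ell.mean(), u.mean()

def robust_credible_interval(ell, u, alpha=0.68, grid_size=1000):
    """Step 5: approximate the smallest robust credible interval by a
    grid search over eta. The interval is centred at argmin_eta
    z_alpha(eta) with radius min_eta z_alpha(eta)."""
    grid = np.linspace(ell.min(), u.max(), grid_size)
    # d(eta, phi_m) = max(|eta - ell_m|, |eta - u_m|), for every grid
    # point (rows) and every posterior draw m (columns)
    d = np.maximum(np.abs(grid[:, None] - ell[None, :]),
                   np.abs(grid[:, None] - u[None, :]))
    z = np.quantile(d, alpha, axis=1)   # sample alpha-quantile over m
    j = np.argmin(z)
    centre, radius = grid[j], z[j]
    return centre - radius, centre + radius
```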

The algorithm may be computationally demanding when the restrictions substantially truncate $\mathcal{Q}(\varphi | Y^T, N, S)$, because many draws of Q may be rejected at each draw of $\varphi$.[19] However, the same draws of Q can be used to compute $\ell(\varphi, Y^T)$ and $u(\varphi, Y^T)$ for different objects of interest, which cuts down on computation time. For example, the same draws can be used to compute the impulse responses of all variables to all shocks at all horizons of interest. Other quantities of interest can also be computed, such as impulse responses to ‘unit’ shocks (e.g. Read 2022b), forecast error variance decompositions, elements of $A_0$ or $A_+$, historical decompositions or structural shocks.
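To illustrate how one set of retained draws serves many objects of interest, the sketch below bounds the impulse responses of all variables to all shocks at all horizons in a single pass over the draws. Representing the reduced-form coefficients as matrices $C_h$, with $c'_{i,h}(\varphi)$ as the ith row of $C_h$, and the name `irf_bounds` are assumptions made for illustration; any other function of $(\varphi, Q)$ could be bounded from the same draws in the same way.

```python
import numpy as np

def irf_bounds(C, Q_draws):
    """Given reduced-form IRF coefficient matrices C[h] (n x n, so that
    the structural impulse responses at horizon h are C[h] @ Q) and the
    retained draws {Q_k}, return elementwise lower and upper bounds over
    k for every (variable, shock, horizon) combination at once."""
    # irfs has shape (K, H, n, n): draw, horizon, variable, shock
    irfs = np.stack([np.stack([C_h @ Q for C_h in C]) for Q in Q_draws])
    return irfs.min(axis=0), irfs.max(axis=0)
```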

Footnotes

This is the algorithm used by Rubio-Ramírez et al (2010) to draw from the uniform distribution over $\mathcal{O}(n)$, except that we do not normalise the diagonal elements of R to be positive. This is because we impose a sign normalisation based on the diagonal elements of $A_0 = Q' \Sigma_{tr}^{-1}$ in Step 2.2. [18]

Read and Zhu (forthcoming) develop more computationally efficient algorithms for obtaining draws of Q from a uniform distribution over the (conditional) identified set given a broad class of identifying restrictions, including NR. [19]