This document takes the user through how to replicate the analysis in RDP 2021-04: Monetary Policy, Equity Markets and the Information Effect by Calvin He. The analysis below is not in the same order as in the RDP, but is instead organised by topic and output. Figure numbers and captions are the same as in the RDP for ease of cross-referencing.
Most of the data used in this document are derived series. This document will not show how these derived series were constructed. Please see ./analysis/monetary-policy-surprises.Rmd for details on the construction of the monetary policy surprises using synthetic data.
Attach the required libraries and load the user-defined functions.
library(data.table)
library(lmtest)
library(sandwich)
library(lubridate)
library(broom)
library(readr)
library(here)
library(strucchange)
library(kableExtra)
library(patchwork)
library(tidyverse)
purrr::walk(list.files(here::here("R"), full.names = T), source) # import user-defined functions
To conduct the main analysis, first we need to import the monetary policy surprises. Two types are used in the analysis:

mp_surprises: the main monetary policy surprise series, constructed using only monetary policy announcements.

mp_surprises_all: constructed in the same way as mp_surprises, but augmented to also include the release of the SMP, RBA Board minutes and speeches delivered by the Governor. I have joined these data with mp_dates, which categorises each date to a type of monetary policy event.

mp_surprises <- read_csv(here::here("data", "mp-surprises", "mp-surprises.csv"))

mp_surprises_all <- read_csv(here::here("data", "mp-surprises", "mp-surprises-all.csv"))

mp_dates <- read_csv(here::here("data", "mp-surprises", "mp-dates.csv")) %>%
  filter(date_time >= min(mp_surprises_all$decision_date_time))

mp_surprises_all <- mp_surprises_all %>%
  left_join(mp_dates, by = c("decision_date_time" = "date_time"))

mp_surprises_all
## # A tibble: 614 x 5
## decision_date decision_date_time pc1_scaled cashrate_change type
## <date> <dttm> <dbl> <dbl> <chr>
## 1 2001-04-04 2001-04-04 09:30:00 -0.104 -0.5 board_meeting
## 2 2001-05-02 2001-05-02 09:30:00 0.0361 0 board_meeting
## 3 2001-06-06 2001-06-06 09:30:00 0.0511 0 board_meeting
## 4 2001-06-14 2001-06-14 12:30:00 -0.00153 NA speech_governor
## 5 2001-07-04 2001-07-04 09:30:00 -0.00561 0 board_meeting
## 6 2001-07-10 2001-07-10 18:30:00 -0.0216 NA speech_governor
## 7 2001-08-08 2001-08-08 09:30:00 0.0126 0 board_meeting
## 8 2001-08-13 2001-08-13 11:30:00 0.00606 NA smp
## 9 2001-09-05 2001-09-05 09:30:00 -0.0444 -0.25 board_meeting
## 10 2001-10-03 2001-10-03 09:30:00 0.0870 -0.25 board_meeting
## # ... with 604 more rows
Both monetary policy surprise objects contain two types of 'monetary policy surprises':

cashrate_change: the cash rate change announced after the RBA Board meeting.

pc1_scaled: the primary measure of monetary policy surprises used in the analysis. See section 4.2.2 of the RDP for further details.

All results using the ASX 200 index as the dependent variable are replicated below. This includes Figures 3, 12, 14, 16 and 7.
The general process to obtain the results is: import the changes in the ASX 200 around monetary policy announcements, join them with the monetary policy surprises (mp_surprises), and then run the information effect regressions.

First, import the changes in the ASX 200 around monetary policy announcements.

equities_surprise <- read_csv(here::here("data", "equities", "asx-shock.csv"))
Next, join the change in the ASX 200 with the monetary policy surprises. This creates the data needed for the regressions.

mp_equities <- left_join(mp_surprises, equities_surprise, by = c("decision_date_time" = "date_time"))

mp_equities
## # A tibble: 212 x 5
## decision_date decision_date_time pc1_scaled cashrate_change asx_shock
## <date> <dttm> <dbl> <dbl> <dbl>
## 1 2001-04-04 2001-04-04 09:30:00 -0.110 -0.5 -0.506
## 2 2001-05-02 2001-05-02 09:30:00 0.0396 0 0.533
## 3 2001-06-06 2001-06-06 09:30:00 0.0559 0 0.391
## 4 2001-07-04 2001-07-04 09:30:00 -0.00503 0 -0.716
## 5 2001-08-08 2001-08-08 09:30:00 0.0146 0 0.120
## 6 2001-09-05 2001-09-05 09:30:00 -0.0455 -0.25 -0.296
## 7 2001-10-03 2001-10-03 09:30:00 0.0942 -0.25 0.105
## 8 2001-11-07 2001-11-07 09:30:00 0.000254 0 -0.277
## 9 2001-12-05 2001-12-05 09:30:00 NA -0.25 0.710
## 10 2002-02-06 2002-02-06 09:30:00 NA 0 -0.281
## # ... with 202 more rows
I will first create a basic specification table (spec). This table effectively contains 'options' for the regression and includes:

remove_gfc: should the global financial crisis period be excluded?

asymmetry: should the regression estimate positive and negative surprises separately?

spec <- expand_grid(remove_gfc = c(T, F),
                    asymmetry = c(T, F)) %>%
  filter(!(remove_gfc == T & asymmetry == T))

This leaves three specifications: the GFC excluded, the full sample with asymmetric effects, and the full-sample baseline.
Using the specification table, I will add the data and perform the information effect regressions row by row. The output from each row will differ based on the specification. For each specification, three regressions are run using the info_regressions function:

pc1_scaled: uses the monetary policy surprise (pc1_scaled) as the independent variable.

cashrate_change: uses the cash rate change (cashrate_change) as the independent variable.

pc1_scaled_move: interacts the monetary policy surprise with whether a cash rate change occurred (used later in the analysis).

A rough sketch of what this helper might look like is given below.
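info_regressions is a user-defined helper sourced from ./R; its definition is not reproduced in this document. As orientation only, a minimal hypothetical sketch consistent with how it is called below (a data frame in, a named list of lm fits out, with an asymmetry option) might look like this:

# Hypothetical sketch of info_regressions(); the real definition lives in ./R.
# It returns a named list of lm fits so downstream code can map
# lmtest::coeftest() and broom::tidy() over each model.
info_regressions <- function(data, y_var, asymmetry = FALSE) {
  fit <- function(rhs) lm(reformulate(rhs, response = y_var), data = data)
  if (asymmetry) {
    # split the surprise by its sign via an interaction with a direction factor
    data$direction <- factor(ifelse(data$pc1_scaled >= 0, "positive", "negative"))
    pc1_model <- fit("pc1_scaled:direction")
  } else {
    pc1_model <- fit("pc1_scaled")
  }
  list(pc1_scaled      = pc1_model,
       cashrate_change = fit("cashrate_change"),
       pc1_scaled_move = fit("pc1_scaled:factor(cashrate_change != 0)"))
}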
output_asx200 <- spec %>%
  mutate(data = list(mp_equities)) %>% # add complete data
  rowwise() %>%
  mutate(reg_data = list(if (remove_gfc) {
           data %>% filter(decision_date < ymd("2008-09-01") | decision_date >= ymd("2009-05-01"))
         } else {data}), # create regression data based on `remove_gfc`
         reg_output = list(info_regressions(reg_data, y_var = "asx_shock", asymmetry = asymmetry))) %>% # perform regression
  as_tibble() %>%
  mutate(reg_coeftest = map(reg_output, ~ map(., ~ lmtest::coeftest(., vcov = NeweyWest))), # perform hypothesis tests
         reg_tidy = map(reg_coeftest, ~ map_dfr(., broom::tidy, .id = "model"))) # tidy data

output_asx200
## # A tibble: 3 x 7
## remove_gfc asymmetry data reg_data reg_output reg_coeftest reg_tidy
## <lgl> <lgl> <list> <list> <list> <list> <list>
## 1 TRUE FALSE <spec_tb~ <spec_tbl_d~ <named lis~ <named list~ <tibble[~
## 2 FALSE TRUE <spec_tb~ <spec_tbl_d~ <named lis~ <named list~ <tibble[~
## 3 FALSE FALSE <spec_tb~ <spec_tbl_d~ <named lis~ <named list~ <tibble[~
The output for all specifications now lives inside output_asx200.
Figure 3 can be reconstructed by accessing the reg_tidy
column which contains the coefficient outputs for each specification.
output_asx200 %>%
  filter(remove_gfc == F, asymmetry == F) %>% # baseline specification
  unnest(reg_tidy) %>%
  relocate(remove_gfc, asymmetry, model, term, estimate, std.error, p.value) # rearrange for nice printing
## # A tibble: 7 x 12
## remove_gfc asymmetry model term estimate std.error p.value data reg_data
## <lgl> <lgl> <chr> <chr> <dbl> <dbl> <dbl> <lis> <list>
## 1 FALSE FALSE pc1_s~ (Inter~ 0.00681 0.0346 8.44e- 1 <spe~ <spec_t~
## 2 FALSE FALSE pc1_s~ pc1_sc~ -2.99 0.434 7.75e-11 <spe~ <spec_t~
## 3 FALSE FALSE cashr~ (Inter~ 0.0176 0.0183 3.37e- 1 <spe~ <spec_t~
## 4 FALSE FALSE cashr~ cashra~ 0.0430 0.0856 6.16e- 1 <spe~ <spec_t~
## 5 FALSE FALSE pc1_s~ (Inter~ 0.00353 0.0368 9.24e- 1 <spe~ <spec_t~
## 6 FALSE FALSE pc1_s~ pc1_sc~ -2.19 1.16 5.98e- 2 <spe~ <spec_t~
## 7 FALSE FALSE pc1_s~ pc1_sc~ -3.43 0.576 1.21e- 8 <spe~ <spec_t~
## # ... with 3 more variables: reg_output <list>, reg_coeftest <list>,
## # statistic <dbl>
Also note that in addition to the usual tidy output from a regression, within each reg_tidy
object you will also find an additional column model
which is the allocated model name for each regression.
Figure 3 can now be replicated with the following code:
output_asx200 %>%
  filter(remove_gfc == F, asymmetry == F) %>%
  unnest(reg_tidy) %>%
  filter(term != "(Intercept)", model %in% c("pc1_scaled", "cashrate_change")) %>%
  mutate(term = case_when(term == "pc1_scaled" ~ "Monetary policy surprise",
                          T ~ "Cash rate change")) %>%
  ggplot(aes(x = term)) +
geom_col(aes(y = estimate, fill = term), position = position_dodge()) +
geom_errorbar(aes(ymin = estimate - 1.96 * std.error, ymax = estimate + 1.96 * std.error),
width = 0.2, position = position_dodge()) +
ylab("%") +
xlab("") +
rba_theme() +
rba_syc() +
rba_ylim(-5, 2.5) +
scale_fill_manual(values = c(rba[["orange1"]], rba[["aqua3"]]) ) +
labs(title = "Figure 3: Response of ASX 200\nto Monetary Policy",
subtitle = "100 basis point monetary policy tightening",
caption = "* Confidence intervals shown are 95 per cent and calculated\n using Newey-West standard errors.
\nSources: Author's calculations; RBA; Refinitiv") +
rba_panel_text_size(title = 1.8, subtitle = 1.2) +
rba_axis_text_size(size = 14)
To compare the coefficients when including or excluding the GFC period, I can manipulate the output_asx200
object as follows to create Figure 12:
output_asx200 %>%
  filter(asymmetry == F) %>% # remove asymmetry specification
  unnest(reg_tidy) %>%
  mutate(type = case_when(remove_gfc ~ "Excluding GFC",
                          T ~ "Full sample")) %>%
  filter(term == "pc1_scaled") %>%
ggplot(aes(x = type)) +
geom_col(aes(y = estimate, fill = type), position = position_dodge()) +
geom_errorbar(aes(ymin = estimate - 1.96 * std.error, ymax = estimate + 1.96 * std.error),
width = 0.2, position = position_dodge()) +
ylab("%") +
xlab("") +
rba_theme() +
rba_syc() +
rba_ylim(-5, 2.5) +
rba_ylab_position(-7)+
scale_fill_manual(values = c(rba[["orange9"]], rba[["green6"]])) +
labs(title = "Figure 12: Response of\nASX 200 to Monetary Policy",
subtitle = "100 basis point monetary policy tightening, by sample",
caption = "* Confidence intervals shown are 95 per cent and calculated\n using Newey-West standard errors.
\nSources: Author's calculations; RBA; Refinitiv") +
rba_panel_text_size(title = 1.8, subtitle = 1.2)
To extract the results comparing when the monetary policy surprise is positive or negative (Figure 14), simply filter for asymmetry == T.
output_asx200 %>%
  filter(asymmetry == T) %>% # filter for desired specification
  unnest(reg_tidy) %>% # unnest regression results
  filter(str_detect(term, "pc1_scaled")) %>% # extract desired coefficient
  mutate(term = str_remove(term, "pc1_scaled:")) %>% # clean term
ggplot(aes(x = term)) +
geom_col(aes(y = estimate, fill = term), position = position_dodge()) +
geom_errorbar(aes(ymin = estimate - 1.96 * std.error, ymax = estimate + 1.96 * std.error),
width = 0.2, position = position_dodge()) +
ylab("%") +
xlab("") +
rba_theme() +
rba_syc() +
rba_ylim(-5, 2.5) +
scale_fill_manual(values = c(rba[["red2"]], rba[["blue7"]]))+
labs(title = "Figure 14: Response of ASX 200\nto Monetary Policy",
subtitle = "100 basis point monetary policy tightening,\nby monetary policy surprise direction",
caption = "* Confidence intervals shown are 95 per cent and calculated\n using Newey-West standard errors.
\nSources: Author's calculations; RBA; Refinitiv") +
rba_panel_text_size(title = 1.8, subtitle = 1.2)
To compare the coefficients for when the cash rate was changed or not (Figure 16), simply access the model pc1_scaled_move
while setting the other specifications to be FALSE
(i.e. baseline).
output_asx200 %>%
  filter(remove_gfc == F, asymmetry == F) %>% # baseline specification options
  unnest(reg_tidy) %>%
  filter(term != "(Intercept)", model == "pc1_scaled_move") %>% # filter for the change vs. no-change model
mutate(term = case_when(stringr::str_detect(term, "FALSE") ~ "No change",
TRUE ~ "Change")) %>%
ggplot(aes(x = term)) +
geom_col(aes(y = estimate, fill = term), position = position_dodge()) +
geom_errorbar(aes(ymin = estimate - 1.96 * std.error, ymax = estimate + 1.96 * std.error),
width = 0.2, position = position_dodge()) +
ylab("%") +
xlab("") +
rba_theme() +
rba_syc() +
rba_ylim(-5, 2.5) +
scale_fill_manual(values = c(rba[["olive1"]], rba[["purple1"]] )) +
labs(title = "Figure 16: Response of ASX 200\nto Monetary Policy",
subtitle = "100 basis point monetary policy tightening,\nby if cash rate changed",
caption = "* Confidence intervals shown are 95 per cent and calculated\n using Newey-West standard errors.
\nSources: Author's calculations; RBA; Refinitiv") +
rba_panel_text_size(title = 1.8, subtitle = 1.2)
To extend the ASX 200 results I use the augmented monetary policy surprise series (mp_surprises_all
). Performing similar steps to above, I import the relevant ASX 200 price changes around RBA communication times, then join the data to the augmented monetary policy surprise series.
equities_surprise_all <- read_csv(here::here("data", "equities", "asx-shock-all.csv"))

mp_equities_all <- left_join(mp_surprises_all, equities_surprise_all,
                             by = c("decision_date_time" = "date_time"))

mp_equities_all
## # A tibble: 614 x 6
## decision_date decision_date_time pc1_scaled cashrate_change type asx_shock
## <date> <dttm> <dbl> <dbl> <chr> <dbl>
## 1 2001-04-04 2001-04-04 09:30:00 -0.104 -0.5 board~ -0.506
## 2 2001-05-02 2001-05-02 09:30:00 0.0361 0 board~ 0.533
## 3 2001-06-06 2001-06-06 09:30:00 0.0511 0 board~ 0.391
## 4 2001-06-14 2001-06-14 12:30:00 -0.00153 NA speec~ 0.313
## 5 2001-07-04 2001-07-04 09:30:00 -0.00561 0 board~ -0.716
## 6 2001-07-10 2001-07-10 18:30:00 -0.0216 NA speec~ -0.0745
## 7 2001-08-08 2001-08-08 09:30:00 0.0126 0 board~ 0.120
## 8 2001-08-13 2001-08-13 11:30:00 0.00606 NA smp -0.262
## 9 2001-09-05 2001-09-05 09:30:00 -0.0444 -0.25 board~ -0.296
## 10 2001-10-03 2001-10-03 09:30:00 0.0870 -0.25 board~ 0.105
## # ... with 604 more rows
Notice the column type
contains the specific RBA communication type of each event.
I now perform the regressions in a similar way (row by row within a dataframe), but in this case only one regression is run (Equation 5 in the RDP); a sketch of the estimated equation follows.
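The RDP's exact notation is not reproduced here; based on the lm(asx_shock ~ pc1_scaled:factor(type), ...) call below, Equation 5 takes approximately the form

$$ \Delta \mathrm{ASX}_t = \alpha + \sum_{c} \beta_c \, s_t \, \mathbb{1}\{\mathrm{type}_t = c\} + \varepsilon_t $$

where s_t is the monetary policy surprise (pc1_scaled) and c indexes the communication types in the type column, so each communication type gets its own slope coefficient.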
output_asx200_all <- mp_equities_all %>%
  nest_by() %>%
  as_tibble() %>%
  mutate(reg_output = map(data, ~ lm(asx_shock ~ pc1_scaled:factor(type), data = .)), # run regression
         reg_coeftest = map(reg_output, ~ lmtest::coeftest(., vcov = NeweyWest)), # perform hypothesis tests for coefficients
         reg_tidy = map(reg_coeftest, broom::tidy)) # tidy output
Now, Figure 7 can be replicated with the following code:
output_asx200_all %>%
  unnest(reg_tidy) %>%
  filter(term != "(Intercept)") %>%
  mutate(term = str_remove_all(term, "pc1_scaled:factor\\(type\\)") %>% str_replace_all("_", " ")) %>%
ggplot(aes(x = term, y = estimate)) +
geom_col(position = position_dodge(), fill = rba[["pink4"]]) +
geom_errorbar(aes(ymin = estimate - 1.96 * std.error, ymax = estimate + 1.96 * std.error),
width = 0.2, position = position_dodge()) +
rba_theme() +
rba_rotate_x_text() +
rba_syc() +
ylab("%") +
rba_ylab_position(-6) +
xlab("") +
rba_ylim(-20, 30 ) +
labs(title = "Figure 7: Response of ASX 200\nto Monetary Policy",
subtitle = "100 basis point monetary policy tightening, by communication type",
caption = "* Confidence intervals shown are 95 per cent and calculated using Newey-West standard errors.
\nSources: Author's calculations; Bloomberg; RBA; Refinitiv")+
rba_panel_text_size(title = 1.8, subtitle = 1.2)
All results using changes in equity earnings growth forecasts from I/B/E/S (via Refinitiv) as the dependent variable are replicated below (Figures 4, 10, 11, 13, 15, 17 and 8). The code is set up in a very similar manner to the ASX 200 results, the main exception being that there are now different types of equity earnings growth forecasts (by GICS sector and horizon).
First, import the weekly changes in equity earnings growth forecasts.
earnings_surprise <- read_csv(here::here("data", "earnings", "earnings-shock.csv"))

# column `lead_d_1w` is the one-week change relative to the forecast that occurs next week
earnings_surprise
## # A tibble: 3,583 x 7
## date horizon industry group next_forecast_d~ last_forecast_d~ lead_d_1w
## <date> <chr> <chr> <chr> <date> <date> <dbl>
## 1 2006-02-02 long t~ asx200 asx2~ 2006-02-09 2006-01-26 -1.69
## 2 2006-03-02 long t~ asx200 asx2~ 2006-03-09 2006-02-23 -0.0150
## 3 2006-03-30 long t~ asx200 asx2~ 2006-04-06 2006-03-23 0.183
## 4 2006-04-27 long t~ asx200 asx2~ 2006-05-04 2006-04-20 0.281
## 5 2006-06-01 long t~ asx200 asx2~ 2006-06-08 2006-05-25 -0.187
## 6 2006-06-29 long t~ asx200 asx2~ 2006-07-06 2006-06-22 -0.205
## 7 2006-07-27 long t~ asx200 asx2~ 2006-08-03 2006-07-20 0.583
## 8 2006-08-31 long t~ asx200 asx2~ 2006-09-07 2006-08-24 0.138
## 9 2006-09-28 long t~ asx200 asx2~ 2006-10-05 2006-09-21 0.062
## 10 2006-11-02 long t~ asx200 asx2~ 2006-11-09 2006-10-26 -0.162
## # ... with 3,573 more rows
For each industry and horizon, I match forecast changes to a monetary policy date. To do this I only keep forecast changes where a monetary policy announcement occurred between two forecast dates. The matching relies on the user-defined rolling_join() helper; a sketch of what it might look like is given below.
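rolling_join() is defined in ./R and its exact signature is not shown in this document (note it is called with roll_type = T here and roll = -Inf later). As orientation only, a hypothetical sketch of a data.table-backed rolling join on a shared date key might look like this:

# Hypothetical sketch of rolling_join(); the real helper lives in ./R.
# It keys both tables on `date` and uses data.table's rolling join, attaching
# the columns of `y` to each row of `x` with the date key rolled.
rolling_join <- function(x, y, ...) {
  x <- data.table::as.data.table(x)
  y <- data.table::as.data.table(y)
  data.table::setkey(x, date)
  data.table::setkey(y, date)
  y[x, ...] # one row per row of x; e.g. roll = -Inf rolls backwards from the next available y date
}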
earnings_data_mp <- earnings_surprise %>%
  mutate(forecast_date = date) %>%
  group_by(group, industry, horizon) %>%
  nest() %>%
  mutate(earnings_data = data,
         # Match monetary policy surprises to a forecast date
         mp = map(data, ~ rolling_join(mp_surprises %>% mutate(date = decision_date),
                                       {.} %>% transmute(date = forecast_date, forecast_date, next_forecast_date, last_forecast_date),
                                       roll_type = T) %>%
           as_tibble() %>%
           filter(decision_date > forecast_date, decision_date < next_forecast_date) %>% # decision between the two forecast dates
           group_by(forecast_date) %>%
           summarise_if(is.numeric, sum, na.rm = T) %>%
           ungroup()),
         data = map2(data, mp, ~ left_join(.x %>% filter(forecast_date %in% .y$forecast_date), .y, by = "forecast_date"))) # only keep relevant forecasts
Using the same specification table (spec), I can perform the same regressions as before.
output_earnings <- expand_grid(earnings_data_mp, spec) %>%
  rowwise() %>%
  mutate(reg_data = list(if (remove_gfc) {
           data %>% filter(date < ymd("2008-09-01") | date >= ymd("2009-05-01"))
         } else {data}),
         reg_output = list(info_regressions(reg_data, y_var = "lead_d_1w", asymmetry = asymmetry))) %>%
  as_tibble() %>%
  mutate(reg_coeftest = map(reg_output, ~ map(., ~ tryCatch(lmtest::coeftest(., vcov = NeweyWest),
                                                            error = function(c) lmtest::coeftest(., vcov = vcovHAC)))),
         reg_tidy = map(reg_coeftest, ~ map_dfr(., broom::tidy, .id = "model")))
Figure 4 can now be replicated with the following code:
output_earnings %>%
  filter(industry == "asx200", asymmetry == F, remove_gfc == F) %>%
  unnest(reg_tidy) %>%
  filter(model %in% c("pc1_scaled", "cashrate_change"), term != "(Intercept)") %>%
  mutate(horizon = forcats::fct_relevel(horizon, "one year ahead"),
         term = case_when(term == "pc1_scaled" ~ "monetary policy surprise",
                          T ~ "cash rate change")) %>%
  ggplot(aes(x = horizon, y = estimate, fill = term)) +
geom_col(position = position_dodge()) +
geom_errorbar(aes(ymin = estimate - 1.96 * std.error, ymax = estimate + 1.96 * std.error), width =0.2, position = position_dodge(width = 0.9)) +
rba_theme(legend = T) +
scale_fill_manual(values = c(rba[["orange1"]], rba[["aqua3"]]) ) +
rba_syc() +
ylab("ppt") +
xlab("") +
rba_ylim(-5, 2 )+
labs(title = "Figure 4: Response of ASX 200 Earnings\nGrowth Forecasts to Monetary Policy",
subtitle = "100 basis point monetary policy tightening",
caption = "* Confidence intervals shown are 95 per cent and calculated\n using Newey-West standard errors.
\nSources: Author's calculations; RBA; Refinitiv; Thomson Reuters") +
rba_panel_text_size(title = 1.8, subtitle = 1.2)
The object output_earnings
also contains the responses of industry level earnings growth forecasts to monetary policy (Figure 10 and 11).
# filter for appropriate graph data
graph_data <- output_earnings %>%
  filter(industry != "asx200", horizon == "one year ahead", asymmetry == F, remove_gfc == F) %>%
  unnest(reg_tidy) %>%
  dplyr::select(-contains("reg"), -data) %>%
  dplyr::filter(model == "pc1_scaled", term != "(Intercept)")
# Business services plot
one_year_business_services <- graph_data %>% filter(group == "business_services") %>%
  ggplot(aes(x = industry, y = estimate)) +
geom_col(position = position_dodge(), fill = rba[["green1"]]) +
geom_errorbar(aes(ymin = estimate - 1.96 * std.error, ymax = estimate + 1.96 * std.error),
width =0.2, position=position_dodge(width = 0.9)) +
labs(subtitle = "Business Services") +
rba_theme() +
ylab("ppt") +
xlab("") +
rba_ylim(-20, 10) +
rba_syc(dup = F)+
theme( plot.subtitle = element_text(size = rel(1.3))) +
rba_rotate_x_text(angle = 90)
# Production sector
one_year_production <- graph_data %>% filter(group == "production") %>%
  ggplot(aes(x = industry, y = estimate)) +
geom_col(position=position_dodge(), fill = rba[["brown2"]] ) +
geom_errorbar(aes(ymin = estimate - 1.96 * std.error, ymax = estimate + 1.96 * std.error),
width =0.2, position=position_dodge(width = 0.9)) +
rba_theme() +
ylab("") +
labs(subtitle = "Production") +
xlab("") +
rba_ylim(-20, 10) +
rba_syc(dup = F)+
theme( plot.subtitle = element_text(size = rel(1.3))) +
rba_rotate_x_text(angle = 90)
# Consumer sector
one_year_consumer <- graph_data %>% filter(group == "consumer") %>%
  ggplot(aes(x = industry, y = estimate)) +
geom_col(position=position_dodge(), fill = rba[["blue3"]]) +
geom_errorbar(aes(ymin = estimate - 1.96 * std.error, ymax = estimate + 1.96 * std.error),
width =0.2, position=position_dodge(width = 0.9)) +
labs(subtitle= "Consumer") +
rba_theme() +
ylab("") +
xlab("") +
rba_ylim(-20, 10) +
rba_syc(dup = F)+
theme( plot.subtitle = element_text(size = rel(1.3))) +
rba_rotate_x_text(angle = 90)
patchwork <- one_year_business_services + one_year_production + one_year_consumer

patchwork + plot_annotation(
  title = "Figure 10: Response of One-year-ahead ASX 200\nEarnings Growth Forecasts to Monetary Policy",
subtitle = "100 basis point monetary policy tightening, GICS sectors",
caption = "* Confidence intervals shown are 95 per cent and calculated using Newey-West standard errors.
\nSources: Author's calculations; RBA; Refinitiv; Thomson Reuters",
theme = rba_theme() + rba_panel_text_size(title = 1.8, subtitle = 1.2))
graph_data <- output_earnings %>%
  filter(industry != "asx200", horizon != "one year ahead", asymmetry == F, remove_gfc == F) %>%
  unnest(reg_tidy) %>%
  dplyr::select(-contains("reg"), -data) %>%
  dplyr::filter(model == "pc1_scaled", term != "(Intercept)")
long_term_business_services <- ggplot(graph_data %>% filter(group == "business_services"), aes(x = industry, y = estimate)) +
  geom_col(position = position_dodge(), fill = rba[["green1"]]) +
geom_errorbar(aes(ymin = estimate - 1.96 * std.error, ymax = estimate + 1.96 * std.error),
width =0.2, position=position_dodge(width = 0.9)) +
labs(subtitle = "Business Services") +
rba_theme() +
ylab("ppt") +
xlab("") +
rba_ylim(-5, 15) +
rba_syc(dup = F)+
theme( plot.subtitle = element_text(size = rel(1.3))) +
rba_rotate_x_text(angle = 90)
long_term_production <- ggplot(graph_data %>% filter(group == "production"), aes(x = industry, y = estimate)) +
  geom_col(position = position_dodge(), fill = rba[["brown2"]]) +
geom_errorbar(aes(ymin = estimate - 1.96 * std.error, ymax = estimate + 1.96 * std.error),
width =0.2, position=position_dodge(width = 0.9)) +
rba_theme() +
ylab("")+
labs(subtitle = "Production") +
xlab("") +
rba_ylim(-5, 15) +
rba_syc(dup = F)+
theme( plot.subtitle = element_text(size = rel(1.3))) +
rba_rotate_x_text(angle = 90)
long_term_consumer <- ggplot(graph_data %>% filter(group == "consumer"), aes(x = industry, y = estimate)) +
  geom_col(position = position_dodge(), fill = rba[["blue3"]]) +
geom_errorbar(aes(ymin = estimate - 1.96 * std.error, ymax = estimate + 1.96 * std.error),
width =0.2, position=position_dodge(width = 0.9)) +
labs(subtitle= "Consumer") +
rba_theme() +
ylab("") +
xlab("") +
rba_ylim(-5, 15) +
rba_syc(dup = F)+
theme( plot.subtitle = element_text(size = rel(1.3))) +
rba_rotate_x_text(angle = 90)
patchwork <- long_term_business_services + long_term_production + long_term_consumer

patchwork + plot_annotation(
  title = "Figure 11: Response of Long-term ASX 200\nEarnings Growth Forecasts to Monetary Policy",
  subtitle = "100 basis point monetary policy tightening, GICS sectors",
caption = "* Confidence intervals shown are 95 per cent and calculated using Newey-West standard errors.
\nSources: Author's calculations; RBA; Refinitiv; Thomson Reuters",
theme = rba_theme() + rba_panel_text_size(title = 1.8, subtitle = 1.2))
Figure 13 can be replicated by comparing the remove_gfc == T
specification to the baseline specification.
output_earnings %>%
  filter(industry == "asx200", asymmetry == F) %>% # remove asymmetry results
  mutate(type = case_when(remove_gfc ~ "Excluding GFC",
                          T ~ "Full sample"),
         horizon = forcats::fct_relevel(horizon, "one year ahead")) %>%
unnest(reg_tidy) %>%
filter(term == "pc1_scaled") %>% # only keep relevant coefficient
ggplot(aes(x = horizon, fill = type)) +
geom_col(aes(y = estimate), position = position_dodge()) +
geom_errorbar(aes(ymin = estimate - 1.96 * std.error, ymax = estimate + 1.96 * std.error), width = 0.2, position = position_dodge(width = 0.9)) +
ylab("ppt") +
xlab("") +
rba_theme(legend = T) +
rba_syc() +
rba_ylim(-6, 4) +
scale_fill_manual(values = c(rba[["orange9"]], rba[["green6"]])) +
labs(title = "Figure 13: Response of ASX 200 Earnings\nGrowth Forecasts to Monetary Policy",
subtitle = "100 basis point monetary policy tightening, by sample",
caption = "* Confidence intervals shown are 95 per cent and calculated\n using Newey-West standard errors.
\nSources: Author's calculations; RBA; Refinitiv; Thomson Reuters") +
rba_panel_text_size(title = 1.8, subtitle = 1.2)
Figure 15 can be replicated by extracting the asymmetry == T
results from the output_earnings
dataframe.
output_earnings %>%
  filter(asymmetry == T, industry == "asx200") %>%
  unnest(reg_tidy) %>%
  filter(str_detect(term, "pc1_scaled")) %>%
  mutate(term = str_remove(term, "pc1_scaled:"),
horizon = forcats::fct_relevel(horizon, "one year ahead")) %>%
ggplot(aes(x = horizon, fill = term)) +
geom_col(aes(y = estimate), position = position_dodge()) +
geom_errorbar(aes(ymin = estimate - 1.96 * std.error, ymax = estimate + 1.96 * std.error),
width = 0.2, position = position_dodge(width = 0.9)) +
ylab("ppt") +
xlab("") +
rba_theme(legend = T) +
rba_syc() +
rba_ylim(-6, 4) +
scale_fill_manual(values = c(rba[["red2"]], rba[["blue7"]])) +
labs(title = "Figure 15: Response of ASX 200 Earnings\nGrowth Forecasts to Monetary Policy",
subtitle = "100 basis point monetary policy tightening,\nby montary policy surprise direction",
caption = "* Confidence intervals shown are 95 per cent and calculated\n using Newey-West standard errors.
\nSources: Author's calculations; RBA; Refinitiv; Thomson Reuters") +
rba_panel_text_size(title = 1.8, subtitle = 1.2)
Figure 17 can be replicated by extracting the pc1_scaled_move
model from the baseline specification options.
output_earnings %>%
  filter(industry == "asx200", remove_gfc == F, asymmetry == F) %>%
unnest(reg_tidy) %>%
filter(term != "(Intercept)", model == "pc1_scaled_move" ) %>%
mutate(term = case_when(stringr::str_detect(term, "FALSE") ~ "No change",
TRUE ~ "Change"),
horizon = forcats::fct_relevel(horizon, "one year ahead")) %>%
ggplot(aes(x = horizon, fill = term)) +
geom_col(aes(y = estimate), position = position_dodge()) +
geom_errorbar(aes(ymin = estimate - 1.96 * std.error, ymax = estimate + 1.96 * std.error),
width = 0.2, position = position_dodge(width = 0.9)) +
ylab("ppt") +
xlab("") +
rba_theme(legend = T) +
rba_syc() +
rba_ylim(-5, 2.5) +
scale_fill_manual(values = c(rba[["olive1"]], rba[["purple1"]] )) +
labs(title = "Figure 17: Response of ASX 200 Earnings\nGrowth Forecasts to Monetary Policy",
subtitle = "100 basis point monetary policy tightening,\nby if cash rate changed",
caption = "* Confidence intervals shown are 95 per cent and calculated\n using Newey-West standard errors.
\nSources: Author's calculations; RBA; Refinitiv; Thomson Reuters") +
rba_panel_text_size(title = 1.8, subtitle = 1.2)
Following similar steps to section 4.1, we can perform the analysis on earnings forecast changes for the augmented monetary policy surprise series (mp_surprises_all
).
First make the regression data.
earnings_surprise_all <- read_csv(here::here("data", "earnings", "earnings-shock-all.csv"))
# Make regression data
reg_data_all <- earnings_surprise_all %>%
  mutate(forecast_date = date) %>%
  group_by(group, industry, horizon) %>%
  nest() %>%
  mutate(earnings_data = data,
         mp = map(earnings_data,
                  ~ rolling_join(mp_surprises_all %>% mutate(date = decision_date),
                                 {.} %>% transmute(date = forecast_date, forecast_date, next_forecast_date, last_forecast_date),
                                 roll_type = T) %>%
                    as_tibble() %>%
                    filter(decision_date > forecast_date, decision_date < next_forecast_date) %>% # decision between the two forecast dates
                    group_by(forecast_date, type) %>%
                    summarise_if(is.numeric, sum, na.rm = T) %>% # sum MP surprises that match the same forecast date, e.g. two speeches in a week
                    ungroup()),
         data = map2(earnings_data, mp, ~ left_join(.x %>% filter(forecast_date %in% .y$forecast_date), .y, by = "forecast_date")))
Similar to section 3.2, we can estimate Equation 5 with the following:
output_earnings_all <- reg_data_all %>%
  filter(industry == "asx200") %>%
  mutate(reg_output = map(data, ~ lm(lead_d_1w ~ pc1_scaled:factor(type), data = .)),
         reg_coeftest = map(reg_output, ~ lmtest::coeftest(., vcov = NeweyWest)),
         reg_tidy = map(reg_coeftest, broom::tidy))
Replicating Figure 8 with the above output can be done as follows:
output_earnings_all %>%
  ungroup() %>%
  unnest(reg_tidy) %>%
  filter(term != "(Intercept)") %>%
  mutate(term = str_remove_all(term, "pc1_scaled:factor\\(type\\)") %>% str_replace_all("_", " "),
         horizon = forcats::fct_relevel(horizon, "one year ahead")) %>%
ggplot(aes(x = term, y = estimate)) +
facet_wrap(~ horizon, ncol = 1) +
geom_col(position = position_dodge(), fill = rba[["pink4"]]) +
geom_errorbar(aes(ymin = estimate - 1.96 * std.error, ymax = estimate + 1.96 * std.error),
width =0.2, position = position_dodge(width = 0.9)) +
rba_rotate_x_text()+
rba_theme(legend = T, facet = T) +
rba_syc() +
ylab("ppt") +
xlab("") +
rba_ylim(-30, 30 )+
labs(title = "Figure 8: Response of ASX 200 Earnings\nGrowth Forecasts to Monetary Policy",
subtitle = "100 basis point monetary policy tightening, by communication type",
caption = "* Confidence intervals shown are 95 per cent and calculated\n using Newey-West standard errors.
\nSources: Author's calculations; RBA; Refinitiv; Thomson Reuters") +
rba_panel_text_size(title = 1.9, subtitle = 1.5)
Section 6 of the paper investigates whether the monetary policy surprises can be predicted by economic news released before the monetary policy announcement. In this section, I regress the monetary policy surprises on economic news surprises, defined as the difference between the actual ABS first release of GDP, the unemployment rate or CPI and the average expectation of that release from the Bloomberg survey of market economists. A sketch of the estimated relationship is given below.
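The RDP's notation is not reproduced here; based on the lm(pc1_scaled ~ surprise_average, ...) call further below, the regression run separately for each release type (GDP, CPI, the unemployment rate) takes approximately the form

$$ s_t = \alpha + \beta \left( \mathrm{actual}_t - \mathrm{survey\ average}_t \right) + \varepsilon_t $$

where s_t is the monetary policy surprise (pc1_scaled) attached to the first monetary policy announcement after the data release.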
To perform this analysis, first import the Bloomberg survey data and calculate the surprise between the first ABS release and the survey average (for each metric: GDP, CPI and the unemployment rate).
# Import Bloomberg survey surprises
economic_releases <- read_csv(here::here("data", "economic-news", "bloomberg-surprises.csv")) %>%
  pivot_wider(names_from = metric, values_from = value) %>%
  mutate(surprise_average = actual - survey_average)

economic_releases
## # A tibble: 461 x 11
## date_time date time country event ticker period units actual
## <dttm> <date> <tim> <chr> <chr> <chr> <chr> <chr> <dbl>
## 1 1997-01-29 11:30:00 1997-01-29 11:30 Austra~ CPI ~ AUCPI~ 4Q other 0.002
## 2 1997-04-23 11:30:00 1997-04-23 11:30 Austra~ CPI ~ AUCPI~ 1Q other 0.002
## 3 1997-07-23 11:30:00 1997-07-23 11:30 Austra~ CPI ~ AUCPI~ 2Q other -0.002
## 4 1997-10-22 11:30:00 1997-10-22 11:30 Austra~ CPI ~ AUCPI~ 3Q other -0.004
## 5 1998-01-28 11:30:00 1998-01-28 11:30 Austra~ CPI ~ AUCPI~ 4Q other 0.003
## 6 1998-04-29 11:30:00 1998-04-29 11:30 Austra~ CPI ~ AUCPI~ 1Q other 0.003
## 7 1998-07-22 11:30:00 1998-07-22 11:30 Austra~ CPI ~ AUCPI~ 2Q other 0.006
## 8 1998-10-28 11:30:00 1998-10-28 11:30 Austra~ CPI ~ AUCPI~ 3Q other 0.002
## 9 1999-01-28 11:30:00 1999-01-28 11:30 Austra~ CPI ~ AUCPI~ 4Q other 0.005
## 10 1999-04-28 11:30:00 1999-04-28 11:30 Austra~ CPI ~ AUCPI~ 1Q other -0.001
## # ... with 451 more rows, and 2 more variables: survey_average <dbl>,
## # surprise_average <dbl>
Next, to ensure that the economic releases occur between two monetary policy announcements, modify the monetary policy surprise object to add the other relevant dates as additional columns, then perform a rolling join with the economic releases so that each piece of economic news is allocated to the appropriate monetary policy surprise.
# Modify monetary policy surprises, then perform a rolling join between the economic releases and monetary policy surprises
mp_surprises_modified <- mp_surprises %>%
  mutate(meeting_date = case_when(year(decision_date) < 2008 ~ decision_date - 1,
                                  T ~ decision_date),
         last_meeting = dplyr::lag(meeting_date),
         date = meeting_date) %>%
  select(date, decision_date, last_meeting, pc1_scaled)

economic_releases_mp <- rolling_join(economic_releases,
                                     mp_surprises_modified,
                                     roll = -Inf) %>%
  filter(!is.na(pc1_scaled)) %>%
  mutate(diff = as.numeric(decision_date - date)) %>%
  group_by(event, decision_date) %>%
  filter(diff == min(diff), diff > 0, date > last_meeting)
Similar to before, run the necessary regression row-by-row within the dataframe.
# Run regressions
output_news_mp <- economic_releases_mp %>%
  group_by(event) %>%
  nest() %>%
  mutate(reg_output = map(data, ~ lm(pc1_scaled ~ surprise_average, data = .)),
         reg_coeftest = map(reg_output, ~ lmtest::coeftest(., vcov. = NeweyWest)),
         reg_tidy = map(reg_coeftest, broom::tidy))
You can now replicate Figure 5 as follows:
output_news_mp %>%
  unnest(reg_tidy) %>%
  filter(term == "surprise_average") %>%
  mutate(event = str_remove(event, "QoQ|SA") %>% str_trim()) %>%
ggplot(aes(x = event, y = estimate))+
geom_col(position = position_dodge(), fill = rba["default6"]) +
geom_errorbar(aes(ymin = estimate - 1.96 * std.error, ymax = estimate + 1.96 * std.error),
width =0.2, position = position_dodge(width = 0.9)) +
rba_theme(legend = T) +
rba_syc() +
ylab("%") +
xlab("") +
rba_ylim(-10, 15 )+
rba_rotate_x_text(angle = 40)+
labs(title = "Figure 5: Monetary Policy Surprises\nand Economic News",
subtitle = "Response by economic news",
caption = "* Confidence intervals shown are 95 per cent and calculated\n using Newey-West standard errors.
\nSources: ABS; Author's calculations; Bloomberg; RBA; Refinitiv") +
rba_panel_text_size(title = 1.8, subtitle = 1.2)
Section 7.1.1 looks at whether economic news has any explanatory power over monetary policy surprises from speeches delivered by the Governor. The procedure is very similar to the one above. The main differences are that the analysis is only done on monetary policy surprises from speeches delivered by the Governor, and that I now impose that the economic news must fall within a two-week window before the speech. This is necessary because speeches occur less regularly, so the last economic release may have occurred long before the speech, by which point the news may be outdated.
# Take MP surprises from Governor speeches
mp_surprises_all_modified <- mp_surprises_all %>%
  filter(type == "speech_governor") %>%
  mutate(meeting_date = case_when(year(decision_date) < 2008 ~ decision_date - 1,
                                  T ~ decision_date),
         last_meeting = dplyr::lag(meeting_date),
         date = meeting_date) %>%
  select(date, decision_date, decision_date_time, last_meeting, pc1_scaled)
# Rolling join to combine with economic releases
combined_data <- rolling_join(economic_releases,
                              mp_surprises_all_modified,
                              roll = -Inf) %>%
  filter(!is.na(pc1_scaled)) %>%
  mutate(diff = as.numeric(decision_date - date)) %>%
  group_by(event, decision_date) %>%
  filter(diff == min(diff), decision_date_time > date_time, diff <= 14)
# Run regressions
output_news_mp_gov <- combined_data %>%
  group_by(event) %>%
  nest() %>%
  mutate(reg_output = map(data, ~ lm(pc1_scaled ~ surprise_average, data = .)),
         reg_coeftest = map(reg_output, ~ lmtest::coeftest(., vcov. = NeweyWest)),
         reg_tidy = map(reg_coeftest, broom::tidy))
Figure 9 can be replicated as follows:
output_news_mp_gov %>%
  unnest(reg_tidy) %>%
  filter(term == "surprise_average") %>%
  mutate(event = str_remove(event, "QoQ|SA") %>% str_trim()) %>%
ggplot(aes(x = event, y = estimate))+
geom_col(position = position_dodge(), fill = rba["default6"]) +
geom_errorbar(aes(ymin = estimate - 1.96 * std.error, ymax = estimate + 1.96 * std.error),
width =0.2, position = position_dodge(width = 0.9)) +
rba_theme(legend = T) +
rba_syc() +
ylab("%") +
xlab("") +
rba_rotate_x_text(angle = 40)+
rba_ylim(-7.5, 5 )+
labs(title = "Figure 9: Monetary Policy Surprises\nand Economic News",
subtitle = "Speech by Govenor, response by economic news",
caption = "* Confidence intervals shown are 95 per cent and calculated\n using Newey-West standard errors.
\nSources: ABS; Author's calculations; Bloomberg; RBA; Refinitiv") +
rba_panel_text_size(title = 1.8, subtitle = 1.2)
Section 8.5 in the RDP investigates the parameter stability of the estimates from the baseline output. This analysis can be conducted using the output generated earlier in the objects output_asx200 and output_earnings. I use the strucchange package to test for breakpoints; a short standalone illustration of its interface follows.
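As a minimal, self-contained illustration of the strucchange interface (simulated data, not part of the replication): the slope below changes at observation 101, and h = 0.1 sets the minimum segment size to 10 per cent of the sample, matching the h used in the tests that follow.

library(strucchange)
set.seed(1)
x <- rnorm(200)
y <- c(1 * x[1:100], 3 * x[101:200]) + rnorm(200) # slope break at t = 101
bp <- breakpoints(y ~ x, h = 0.1) # estimates break locations; number of breaks chosen by BIC
summary(bp)
confint(bp) # confidence intervals for the break locations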
For the ASX 200 response to monetary policy the breakpoint tests are conducted as follows:
# Breakpoint test on ASX 200 ~ MP surprises
output_asx200_bp <- output_asx200 %>%
  filter(asymmetry == F, remove_gfc == F) %>%
  mutate(new_reg_data = map(reg_data, ~ {.} %>%
           filter(!is.na(asx_shock), !is.na(pc1_scaled)) %>%
           mutate(n = 1:n())),
         breakpoints = map(new_reg_data, ~ breakpoints(asx_shock ~ pc1_scaled, data = ., h = 0.1)), # breakpoint test
         breakdates = map2(new_reg_data, breakpoints, ~ {out <- .x$decision_date[.y$breakpoints]
           if (anyNA(out)) out <- NA
           return(out)}), # extract breakpoints
         breakdates_ci = map2(breakpoints, breakdates, ~ if (!anyNA(.y)) {
           confint(.x)$confint %>% as_vector()
         } else {NA})) # extract confidence intervals of breakpoints
# Tabulate any breakpoints
output_asx200_bp %>%
  select(breakdates) %>%
  mutate(breakdates = map_chr(breakdates, ~ paste(., collapse = ", "))) %>%
  kableExtra::kable(caption = "ASX 200 - breakdates", align = 'c', position = "center")
| breakdates |
|:----------:|
| NA |
A similar procedure can be followed for the earnings forecast output. Notice the dates are toward the month end; this is the initial forecast window, with the Board meeting occurring early in the next month (which is the date quoted in the main text).
output_earnings_bp <- output_earnings %>%
  filter(asymmetry == F, remove_gfc == F, industry == "asx200") %>%
  mutate(new_reg_data = map(reg_data, ~ {.} %>%
           filter(!is.na(lead_d_1w), !is.na(pc1_scaled)) %>%
           select(date, lead_d_1w, pc1_scaled) %>%
           mutate(n = 1:n())),
         breakpoints = map(new_reg_data, ~ breakpoints(lead_d_1w ~ pc1_scaled, data = ., h = 0.1)),
         breakdates = map2(new_reg_data, breakpoints, ~ {out <- .x$date[.y$breakpoints]
           if (anyNA(out)) out <- NA
           return(out)}),
         breakdates_ci = map2(breakpoints, breakdates, ~ if (!anyNA(.y)) {
           confint(.x)$confint %>% as_vector()
         } else {NA}))
output_earnings_bp %>%
  filter(industry == "asx200") %>%
  select(industry, horizon, breakdates) %>%
  mutate(breakdates = map_chr(breakdates, ~ paste(., collapse = ", "))) %>%
  pivot_wider(id_cols = c("industry", "horizon"), names_from = industry, values_from = breakdates) %>%
  kableExtra::kable(caption = "Earnings - breakdates", align = 'c', position = "center")
| horizon | asx200 |
|:--------------:|:----------------------:|
| long term | NA |
| one year ahead | 2009-01-29, 2010-05-27 |
Given the breakpoints found for the one-year-ahead forecasts, I perform break adjustments in the baseline regression. The break-adjusted equation is sketched below, followed by the code that replicates the procedure and plot.
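This is a restatement of the lm() call below rather than notation from the RDP: with s_t the monetary policy surprise (pc1_scaled) and y_t the one-week change in one-year-ahead earnings growth forecasts (lead_d_1w), the break-adjusted regression is approximately

$$ y_t = \alpha + \beta_0 s_t + \beta_1 s_t D^{(1)}_t + \beta_2 s_t D^{(2)}_t + \varepsilon_t $$

where D^(1) is 1 between 2009-01-29 and 2010-05-27 and D^(2) is 1 from 2010-05-27 onwards, so the slope in each regime is the baseline coefficient plus the relevant break term.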
output_earnings_bp_adjustment <- output_earnings_bp %>%
  filter(industry == "asx200", horizon == "one year ahead") %>%
  {.$new_reg_data[[1]]} %>%
  mutate(dummy_one = (date >= "2009-01-29" & date < "2010-05-27"), # add in dummies
         dummy_two = (date >= "2010-05-27")) %>%
  lm(lead_d_1w ~ pc1_scaled + pc1_scaled:factor(dummy_one) + pc1_scaled:factor(dummy_two), data = .) %>% # run regression
  lmtest::coeftest(., vcov = vcovHAC) %>%
  broom::tidy()
output_earnings_bp_adjustment %>%
  filter(term != "(Intercept)") %>%
  mutate(term = case_when(str_detect(term, "dummy_one") ~ "Break 1",
                          str_detect(term, "dummy_two") ~ "Break 2",
                          TRUE ~ "Baseline"),
         ci_upper = estimate + 1.96 * std.error,
         ci_lower = estimate - 1.96 * std.error) %>%
  ggplot(aes(x = term, y = estimate)) +
geom_col(position = position_dodge(), fill = rba[["maroon1"]]) +
geom_errorbar(aes(ymin = ci_lower, ymax = ci_upper),
width =0.2, position = position_dodge(width = 0.9)) +
rba_theme(legend = T) +
rba_syc() +
ylab("ppt") +
xlab("") +
rba_ylim(-12, 3 )+
labs(title = "Figure 18: Response of One-year-ahead ASX 200\nEarnings Growth Forecasts to Monetary Policy",
subtitle = "Break adjusted",
caption = "* Confidence intervals shown are 95 per cent and calculated\n using Newey-West standard errors.
\nSources: Author's calculations; RBA; Refinitiv; Thomson Reuters") +
rba_panel_text_size(title = 1.5, subtitle = 1.3) +
rba_ylab_position(-7)
sessionInfo()
## R version 4.0.3 (2020-10-10)
## Platform: x86_64-w64-mingw32/x64 (64-bit)
## Running under: Windows 10 x64 (build 14393)
##
## Matrix products: default
##
## locale:
## [1] LC_COLLATE=English_Australia.1252 LC_CTYPE=English_Australia.1252
## [3] LC_MONETARY=English_Australia.1252 LC_NUMERIC=C
## [5] LC_TIME=English_Australia.1252
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] forcats_0.5.1 stringr_1.4.0 dplyr_1.0.5 purrr_0.3.4
## [5] tidyr_1.1.3 tibble_3.1.0 ggplot2_3.3.3 tidyverse_1.3.0
## [9] patchwork_1.1.1 kableExtra_1.3.4 strucchange_1.5-2 here_1.0.1
## [13] readr_1.4.0 broom_0.7.5 lubridate_1.7.10 sandwich_3.0-0
## [17] lmtest_0.9-38 zoo_1.8-9 data.table_1.14.0
##
## loaded via a namespace (and not attached):
## [1] Rcpp_1.0.6 svglite_2.0.0 lattice_0.20-41 assertthat_0.2.1
## [5] rprojroot_2.0.2 digest_0.6.27 utf8_1.2.1 cellranger_1.1.0
## [9] R6_2.5.0 backports_1.2.1 reprex_1.0.0 evaluate_0.14
## [13] highr_0.8 httr_1.4.2 pillar_1.5.1 rlang_0.4.10
## [17] readxl_1.3.1 rstudioapi_0.13 jquerylib_0.1.3 rmarkdown_2.7
## [21] labeling_0.4.2 webshot_0.5.2 munsell_0.5.0 compiler_4.0.3
## [25] modelr_0.1.8 xfun_0.20 pkgconfig_2.0.3 systemfonts_1.0.1
## [29] htmltools_0.5.1.1 tidyselect_1.1.0 fansi_0.4.2 viridisLite_0.3.0
## [33] withr_2.4.1 crayon_1.4.1 dbplyr_2.1.0 grid_4.0.3
## [37] jsonlite_1.7.2 gtable_0.3.0 lifecycle_1.0.0 DBI_1.1.1
## [41] magrittr_2.0.1 scales_1.1.1 cli_2.4.0 stringi_1.5.3
## [45] farver_2.1.0 fs_1.5.0 xml2_1.3.2 bslib_0.2.4
## [49] ellipsis_0.3.1 generics_0.1.0 vctrs_0.3.7 tools_4.0.3
## [53] glue_1.4.2 hms_1.0.0 yaml_2.2.1 colorspace_2.0-0
## [57] rvest_1.0.0 knitr_1.31 haven_2.3.1 sass_0.3.1