# RDP 2024-02: Valuing Safety and Privacy in Retail Central Bank Digital Currency

## 2. Survey Question for the Discrete Choice Experiment

Readers of this paper might not be familiar with the discrete choice experiment technique, as it is rarely used in the literature on money and central banking.[3] We offer an introduction here, as it pertains to our specific application, and highlight important features and our rationale along the way. This necessarily involves a heavy focus on the survey question that forms the basis for the discrete choice experiment, since the question wording has a significant impact on the validity of our conclusions (as is the case with all surveys). The statistical methods used for the analysis of discrete choice experiment responses are typically far less controversial. For readers looking for a comprehensive outline of the discrete choice experiment technique, we recommend Fiebig and Hall (2005).

We asked CPS respondents the following question, randomising the entries in the table shown to each respondent. The table below shows the full set of possible entries; each respondent saw only one entry in each cell.

There is a debate about whether people should be allowed to have bank accounts at the Reserve Bank of Australia, which is government-owned. People would be able to access their money using mobile phones, computers, or cards, just like they can at other banks already offering bank accounts.

This question seeks to understand how much you would value this option.

Assume that you are opening a new bank account. You have found two options, both offering the same functionality for making withdrawals, deposits, and electronic payments. The only differences are described in the table below.

Which account looks more attractive to you?

|  | Account A | Account B |
| --- | --- | --- |
| **What is the account fee?** *(each cell contains 1 of 2 possible entries, randomised)* | 1. [$20] or 2. [$25] per year | 1. [$20] or 2. [$25] per year |
| **Who provides the account and is responsible for protecting the money in it?** *(each cell contains 1 of 2 possible entries, randomised)* | 1. [The Reserve Bank of Australia] or 2. [One of the large banks already offering accounts in Australia] | 1. [The Reserve Bank of Australia] or 2. [One of the large banks already offering accounts in Australia] |
| **Who could potentially access my transaction data?** *(each cell contains 1 of 4 possible entries, randomised)* | 1. [No-one. The transactions are encrypted and anonymous.] or 2. [Australia's financial crime authority only] or 3. [Only {insert account providing entity}] or 4. [Only Australia's financial crime authority and {insert account providing entity}] | 1. [No-one. The transactions are encrypted and anonymous.] or 2. [Australia's financial crime authority only] or 3. [Only {insert account providing entity}] or 4. [Only Australia's financial crime authority and {insert account providing entity}] |
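The per-respondent randomisation could be implemented along the following lines. This is a minimal sketch using the attribute levels from the table above; the actual survey software is not described here, and the function and variable names are our own.

```python
import random

# Attribute levels taken from the survey table above
FEES = ["$20 per year", "$25 per year"]
PROVIDERS = [
    "The Reserve Bank of Australia",
    "One of the large banks already offering accounts in Australia",
]
DATA_ACCESS = [
    "No-one. The transactions are encrypted and anonymous.",
    "Australia's financial crime authority only",
    "Only {provider}",
    "Only Australia's financial crime authority and {provider}",
]

def draw_account() -> dict:
    """Draw one randomised column (account) of the survey table."""
    provider = random.choice(PROVIDERS)
    return {
        "fee": random.choice(FEES),
        "provider": provider,
        # The data-access options refer to whichever entity provides the account
        "data_access": random.choice(DATA_ACCESS).format(provider=provider),
    }

# Each respondent sees one independent draw for each account
table = {"Account A": draw_account(), "Account B": draw_account()}
```

Because each cell is drawn independently, the two accounts can coincide on any attribute, which is what allows the later analysis to isolate the effect of each characteristic.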

Several features of the question warrant explanation:

• Preamble. The first paragraph of the question is a preamble designed to convey that answers given by survey respondents are consequential. The intention is to discourage respondents from giving little or no thought to their answer, on account of perceptions that there is nothing much at stake. Preambles performing this purpose are central features of discrete choice experiments and are often far more extensive than a single paragraph of text (the survey question underlying Bishop et al (2017) is a good example). We use a short preamble to accommodate concerns about respondent fatigue in the CPS (which has 46 other questions in the module containing the experiment), relying more on the RBA branding and introductory material of the survey to convey the importance of responses. We also check the attentiveness of survey respondents, using a technique described in Section 4.2.
• Bank account analogy. It is challenging to write a question about CBDCs without taking a lot of time to explain to survey participants what CBDCs are. For this reason, we avoid the term CBDC altogether, instead relying on more accessible analogies with bank accounts. We have set up the question as a choice between two new bank accounts, rather than having one of them be a respondent's existing account, to give us full flexibility over the account characteristics, and to protect our results from distortions arising from the inconvenience of shifting accounts.
• Cash availability. The Australian Government is committed to ensuring Australians maintain adequate access to physical currency (Australian Government 2023). By setting up the bank account scenario as a choice to be made today, our question deliberately implies that physical currency continues to be available. This matters for the interpretation of our results because cash offers an alternative option that is also a central bank claim and offers high levels of transaction privacy.
• Randomisation. The randomisation of the account characteristics is a strategy aimed at supporting credible statistical analysis, in the same way that medical trials or other controlled trials tend to randomise the allocation of treatments to experimental subjects. Randomisation of this kind is a central feature of the discrete choice experiment technique.
• Fees. We include fees in the account characteristics, so that we can communicate our results in terms of dollar amounts that respondents would be willing to pay to switch account characteristics. Unlike other common measures of preferences in the CBDC literature, willingness to pay has an objective interpretation, and a format that is useful for cost-benefit analysis. We construct these willingness-to-pay estimates using statistical techniques outlined in Section 4.

Expressing preferences in terms of willingness to pay is another central feature of the discrete choice experiment technique. An associated challenge is that any differences in fees seen by survey participants must be set at levels that neither dwarf, nor are dwarfed by, the valuations placed on the differences in other product characteristics. Otherwise, the researcher encounters scenarios in which, say, survey participants always choose the cheapest products, no matter what the other product differences are. In that case it becomes impossible to identify the average willingness to pay to switch product characteristics. It is therefore common to calibrate fees with a pilot survey. We conducted two pilots: one on internal RBA staff and another via our survey provider. Both pilots were also used to identify opportunities to simplify the language of the survey question.
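The mechanics of converting binary choices into dollar-denominated willingness to pay can be sketched with a simulated binary logit. Everything below is hypothetical: the true coefficients, the sample size, and the single "RBA provision" attribute are our own illustrative choices, not the paper's estimates. Willingness to pay falls out as the ratio of an attribute's utility coefficient to the marginal utility of a dollar.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000  # simulated respondents (hypothetical)

# Hypothetical true preferences: each extra dollar of annual fee costs
# 0.4 utils, and RBA provision is worth 4.0 utils, i.e. $10 per year
beta_fee, beta_rba = -0.4, 4.0

# Randomised attributes, mirroring the survey table: fee of $20 or $25,
# and provider either the RBA (1) or a large commercial bank (0)
fee = rng.choice([20.0, 25.0], size=(n, 2))
rba = rng.choice([0.0, 1.0], size=(n, 2))

# Binary logit: P(choose A) depends only on the A-minus-B attribute differences
d = np.column_stack([fee[:, 0] - fee[:, 1], rba[:, 0] - rba[:, 1]])
p_a = 1.0 / (1.0 + np.exp(-(d @ np.array([beta_fee, beta_rba]))))
chose_a = (rng.random(n) < p_a).astype(float)

# Recover the coefficients by logistic-regression gradient ascent
b = np.zeros(2)
for _ in range(2000):
    p_hat = 1.0 / (1.0 + np.exp(-(d @ b)))
    b += 0.1 * d.T @ (chose_a - p_hat) / n

# Willingness to pay: utility of the attribute divided by the marginal
# utility of a dollar (the negative of the fee coefficient)
wtp_rba = b[1] / -b[0]
print(f"estimated WTP for RBA provision: ${wtp_rba:.2f} per year")
```

The sketch also illustrates the calibration point above: if the fee gap were made very large relative to the attribute valuations, almost all simulated respondents would simply pick the cheaper account, and the attribute coefficients would be estimated very imprecisely.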

• Privacy possibilities. Our privacy options do not capture the full range of entities that might see data in transactions involving commercial bank deposits. The exact set of entities typically depends on the payment system used for transacting (see Amiri et al (2023) for a useful discussion). Our privacy options do not capture the complete set of possibilities for retail CBDC either. Our simpler list of options is designed to improve statistical power and streamline our survey question, while retaining enough resolution to usefully inform policymakers.
• Anchoring. Since each participant chooses between the accounts displayed, they reveal only whether their valuation of the combined difference in safety and privacy characteristics is higher or lower than the difference in fees. There is no opportunity for individuals to offer their exact valuation. This higher/lower set-up is a deliberate feature of discrete choice experiments, to circumvent any potential problems associated with ‘anchoring’. For example, a respondent's latent valuation of the difference in safety and privacy might be $13 in favour of account A, and seeing it is only $5 more expensive might drag their perceived valuation down. But unless their perceived valuation crosses to below $5 after seeing that is the fee difference (unlikely), the respondent will answer the question the same way irrespective of whether they have been anchored or not. It is partly for their effectiveness at handling anchoring issues that discrete choice experiments are relied on for some high-stakes research scenarios, such as calculating legal damages arising from oil spills for lost environmental amenity (Bishop et al 2017).[4]
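The robustness to anchoring can be made concrete with the hypothetical numbers from the example above: the recorded answer depends only on which side of the fee difference the valuation falls, so moderate anchoring leaves the response unchanged.

```python
def chooses_account_a(valuation: float, fee_gap: float) -> bool:
    """Respondent picks A iff the safety/privacy bundle is worth more than A's extra fee."""
    return valuation > fee_gap

fee_gap = 5.0       # account A is $5/year more expensive
unanchored = 13.0   # latent valuation before seeing the fees
anchored = 9.0      # dragged down by the $5 anchor, but still above it

# Both valuations produce the same recorded answer, so the higher/lower
# response, and hence the estimation, is unaffected
print(chooses_account_a(unanchored, fee_gap), chooses_account_a(anchored, fee_gap))  # True True
```

Only an anchor strong enough to push the valuation across the $5 threshold would change the recorded choice.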

## Footnotes

[3] On the topic of Australian payments, one example is Lam and Ossolinski (2015), who estimate willingness to pay surcharges to use debit cards and credit cards, rather than cash.

[4] Another way to circumvent anchoring issues is to ask respondents to state a valuation without offering any cue about what a reasonable response might be (such as bins in which to place their valuation). But this technique tends to cause indecision in survey respondents (see Carson and Hanemann (2005) for further discussion).