ARCHIVED CONTENT

Applying a Structured Approach to Operational Risk Scenario Analysis in Australia

Information Paper

Emily Watchorn — September 2007

 

January 2013 - Update to this Information Paper

Please note that since this Information Paper was released in September 2007 there have been significant developments in operational risk management and measurement practices, and the contents of this Paper should therefore be considered in the light of these improvements. Authorised deposit-taking institutions, especially those with approval to use an Advanced Measurement Approach (as stated in paragraph 23 of APS 115), need to ensure that, as industry practices evolve and improve over time, these developments are assessed as part of their own practices. APRA will be releasing further guidance on operational risk practices in the near future.

 

Copyright

The material in this publication is copyright. You may download, display, print or reproduce material in this publication in unaltered form for your personal, non-commercial use or within your organisation, with proper attribution given to the Australian Prudential Regulation Authority (APRA). Other than for any use permitted under the Copyright Act 1968, all other rights are reserved.

Requests for other uses of the information in this publication should be directed to
APRA Public Affairs Unit, GPO Box 9836, Sydney NSW 2001 or
public.affairs@apra.gov.au

© Australian Prudential Regulation Authority (2007)
 
 

Disclaimer

While APRA endeavours to ensure the quality of this Publication, APRA does not accept any responsibility for the accuracy, completeness or currency of the material included in this Publication, and will not be liable for any loss or damage arising out of any use of, or reliance on, this Publication.

 

Inquiries

For more information on the contents of this publication contact:
Emily Watchorn,
Supervisory Support Division
Australian Prudential Regulation Authority
GPO Box 9836
Sydney NSW 2001
Tel: 61 2 9210 3000
 
 

Acknowledgements

Author: Emily Watchorn.

I wish to thank Harvey Crapp, George Efstathakis and André Levy for their helpful comments and suggestions.

 

 

Contents

Introduction
Subject Matter Experts or Statisticians
Judgemental Biases
  Availability Bias
  Anchoring
  Motivational bias
Uncertainty or Variability
Likelihood Assessments
Severity Assessments
Using Multiple Experts
Challenge and Validation
Importance of Documentation
Conclusion
References

Introduction

Banks intending to apply the Advanced Measurement Approach (AMA) to Operational Risk are required to use scenario analysis as one of the key data inputs into their capital model.1 Scenario analysis is a forward-looking approach, and it can be used to complement the banks’ short recorded history of operational risk losses, especially for low-frequency, high-impact (LFHI) events.

A common approach taken by banks2 is to ask staff with relevant business expertise to estimate the frequency and impact for the plausible scenarios that have been identified. A range of techniques is available for eliciting these assessments from business managers and subject matter experts, each with its own strengths and weaknesses. More than thirty years of academic literature is available in the area of eliciting probability assessments from experts. Much of this literature is informed by psychologists, economists and decision analysts, who have done research into the difficulties people face when trying to make probability assessments. The literature provides insight into the sources of uncertainty and bias surrounding scenario assessments, and the methods available for their mitigation.

The purpose of this paper is to increase awareness of the techniques that are available to ensure scenario analysis is conducted in a structured and robust manner. Banks should be aware of the variety of methods available, and should consider applying a range of techniques as appropriate.

Subject Matter Experts or Statisticians

Expertise in a subject matter is not the same as expertise in probability and statistics. Participants in AMA scenario workshops are usually business unit managers and other staff with relevant business experience, typically chosen by the bank’s Group Operational Risk (GOR) function. Jenkinson (2005, p1) classifies these individuals as substantive experts, reflecting their expertise in the subject matter under consideration. In contrast, normative experts are those who are trained in elicitation methods or who have good numeracy skills.

Kadane and Wolfson (1998, p3) assert that the goal of elicitation is “to make it as easy as possible for subject matter experts to tell us what they believe, in probabilistic terms, while reducing how much they need to know about probability theory to do so”. The structure of scenario analysis workshops, and the design and wording of the questions asked, will determine how much the participants will need to know about statistics and probability to complete their assessments meaningfully.

Trying to specify one’s beliefs in probabilistic terms can be a difficult task, and it is not always easy to avoid being illogical or inconsistent. Normative experts can play a role in identifying these inconsistencies by facilitating the scenario workshops. Banks should include such normative experts throughout the scenario analysis process to ensure that consistent and robust estimates are made.

Judgemental Biases

Conscious or subconscious discrepancies between a participant’s responses and an accurate description of their underlying knowledge are termed ‘biases’ (Spetzler and Von Holstein, 1975, p345).

  • Subconscious, or judgemental, biases arise as a consequence of the limitations in one’s memory or information processing capacity.
  • Conscious, or motivational, biases arise when the participant has an interest in influencing the results of the analysis.

The following discussion covers some of the biases identified in the literature that have become evident in banks’ current approaches to AMA scenario analysis. Banks can explore these biases in an attempt to elicit scenario assessments that more accurately reflect the experts’ underlying beliefs about future outcomes.

Availability Bias

Availability refers to the ease with which relevant information is recalled or visualised. For example, one may assess the probability of developing cancer by recalling how many of their acquaintances have suffered from the illness. In scenario analysis, the availability bias can arise in the process of identifying the relevant scenarios to be assessed. Most banks have acknowledged that not every plausible scenario will be identified, and they have recognised the use of external loss data in inspiring scenarios that might have otherwise been overlooked.

Availability can also affect the frequency assessments. The likelihood or frequency of an event may be overstated if a relevant event is in close proximity; that is, if the event has occurred recently in the market the participants operate in, or if it has been personally experienced by them. For example, one may overestimate the risk of an office fire if they have personally experienced a fire, since it is readily available in their memory. Conversely, the likelihood may be underestimated if the participants have not previously experienced the event. The extent of this understatement will depend on the ease with which the participants can imagine the situation, and it can be influenced by the level of detail with which the scenario is described or discussed in the workshop.

Anchoring

When people make quantitative estimates, they start from an initial value and then adjust it to yield their final answer (Kahneman and Tversky, 1974, p1128). Consequently, different starting points will lead to different estimates. Hammond, Keeney and Raiffa (1998, p48) conducted an experiment to illustrate the anchoring trap. They asked one group of people the following two questions:

  1. Is the population of Turkey greater than 35 million?
  2. What is your best estimate of Turkey’s population?

They then asked a different group of people the same two questions, but replaced ‘35 million’ with ‘100 million’ in the first question. Their results showed that the anchor created by the first question significantly affected people’s answers to the second question. The second group consistently estimated Turkey’s population to be millions higher than the first group.

In essence, anchoring is a particular instance of the availability bias. The information contained in the assessment questions is readily available to the expert, and it may overshadow their prior knowledge and distort their underlying beliefs.

A common starting point for AMA scenario analysis has been the review of external loss data. While use of external data can help to overcome the availability bias in identifying or imagining scenarios, the provision of external frequency and impact statistics can act as an anchor for the participants’ estimates.

In extreme cases, the participants will not adjust from the anchor at all. This will often occur when there is a high level of uncertainty surrounding the assessment. It has been particularly noticeable in participants’ frequency assessments. It is important that banks recognise the implications of participants replicating external loss data for scenario analysis, as their models often assume that scenario analysis results are independent of the external loss data.

Participants may not appreciate the uncertainty surrounding the use of external data, caused by factors such as small sample size, irrelevance and reporting bias. In APRA’s review of AMA applications in Australia, scenario workshop participants have expressed the difficulty they have faced in finding relevant examples from external data, and in some cases they have been unable to find relevant data at all. Notwithstanding this, participants have often duplicated the external data as their own subjective assessments.

Subject matter experts are likely to have underlying knowledge and beliefs about the quantities they are being asked to assess, and they may be capable of estimating these quantities instinctively, without the help of data. To ensure that the external data provided does not deemphasise that knowledge, banks could consider recording the experts’ assessments both before and after the data is made available to them. The experts should be asked to provide justification for any changes made to their assessments in light of the data provided.

If statistical data is not made available, experts will resort to a different anchor. Often this will be their subjective ‘best estimate’ of the quantity in question, since this is usually the next most available piece of information. Anchoring on the best estimate is an important bias to be aware of for impact assessments, particularly if the participants are being asked to estimate an extreme value such as a worst case or a 90th percentile. The best estimate is a more central value, and the upward adjustment required to reach an extreme value may be insufficient. This means that the extreme values will be understated, and the resulting distribution will be too tight (Kahneman and Tversky, 1974, p1129).

A technique available to overcome this bias is to ask the participants what they consider to be extreme values for the impact, and then ask for scenarios that might lead to outcomes outside of those extremes. Eliciting such extreme values makes them available to the participants (deliberate use of availability) and can help to overcome the insufficient upward adjustment from the central best estimate anchor (Spetzler and Von Holstein, 1975, p354).

Motivational bias

Motivational biases arising in scenario analysis can lead to the understatement of frequency and impact assessments, overstatement of the effectiveness of controls, and understatement of the uncertainty surrounding the assessments made. Hillson and Hulett (2004, p3) note that motivational biases are particularly evident among more senior managers.

Managers may be uncomfortable making assessments at extreme percentiles. They may not want to acknowledge that they could ever incur such large losses, as this reflects badly on their controls. Managers may feel that admitting the fallibility of controls gives the perception of deficient risk management practices, and they often become defensive when asked to estimate extreme losses for their business unit. Normative experts or the workshop facilitators can explain to participants that scenario analysis is not a projection, and that admitting these large losses can occur is not a reflection of the manager’s performance.

Motivational bias can affect the participants’ estimate of the uncertainty surrounding their assessments. An expert may conceal the full extent of the uncertainty that they feel, because they presume that someone in their position is expected to know, with a high degree of certainty, what could happen in their area of expertise (Spetzler and Von Holstein, 1975, p345).

Managers also have an incentive to understate potential losses in order to reduce the capital that is allocated to their business unit. Some banks have made scenario estimates transparent across the organisation, so that they are essentially subject to peer review, in addition to the formal challenge process.

Uncertainty or Variability

Workshop participants may become overly focussed on making point estimates to quantify frequency and impact, and lose sight of the inherent variability surrounding these quantities. Extreme operational risk events are random by their nature, and business experts should not expect to be able to predict their frequency and impact with a high degree of accuracy. It may be useful for normative experts to introduce participants to the notion of stochastic and deterministic quantities3, and explain that there is a range of possible outcomes for the frequency and impact associated with each scenario.

Daneshkhah (2004, p1-2) provides a distinction between two sources of uncertainty that come into play during elicitation:

  • Aleatory uncertainty arises from natural, unpredictable variation in the quantity under consideration. It is also described as irreducible uncertainty, since it cannot be reduced by further investigation;
  • Epistemic uncertainty is due to a lack of knowledge about the quantity to be estimated. It is also called reducible uncertainty since it can be reduced with sufficient study or the knowledge of experts.

Anderson and Hattis (1999, p47) discuss the concept in terms of uncertainty and variability:

  • Variability is an objective property representing heterogeneity in a population, and is irreducible by additional measurements;
  • Uncertainty refers to partial ignorance or imperfect knowledge on the part of the expert, and may be reduced by further measurement.

One way a bank can attempt to separate these sources of uncertainty is to ask the experts to:

  • describe what they believe to be the natural variability of the frequency and impact distributions (aleatory uncertainty);
  • express how confident or uncertain they feel about the quantities they have estimated (epistemic uncertainty).

The distinction between epistemic and aleatory uncertainty can be useful since epistemic uncertainty can be reduced. Oakley and O’Hagan (2004, p240) refer to the use of sensitivity analysis in identifying the scope for reducing uncertainty due to epistemic sources.
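To illustrate how these two layers might be represented once elicited, the sketch below (a hypothetical illustration, not a prescribed approach) treats the expert’s stated natural variability as a Poisson frequency distribution and their stated confidence about the rate as a separate distribution over the rate itself; all parameter names and values are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical elicited inputs (assumed values, for illustration only):
best_estimate_rate = 0.2     # aleatory layer: "about 1 event in 5 years"
rate_uncertainty_sd = 0.1    # epistemic layer: how unsure the expert is about that rate

n_sims = 100_000

# Epistemic layer: draw a plausible annual rate for each simulation,
# modelled here with a lognormal spread around the best estimate.
sigma = np.sqrt(np.log(1.0 + (rate_uncertainty_sd / best_estimate_rate) ** 2))
mu = np.log(best_estimate_rate) - 0.5 * sigma ** 2
simulated_rates = rng.lognormal(mean=mu, sigma=sigma, size=n_sims)

# Aleatory layer: given each rate, the number of events in a year is still random.
annual_counts = rng.poisson(lam=simulated_rates)

# Comparing the two runs shows how much of the overall spread is epistemic.
counts_no_epistemic = rng.poisson(lam=best_estimate_rate, size=n_sims)
print("P(>=1 event), aleatory only:       ", (counts_no_epistemic > 0).mean())
print("P(>=1 event), with epistemic layer:", (annual_counts > 0).mean())
```

Varying the epistemic spread (here, rate_uncertainty_sd) is one simple form of the sensitivity analysis referred to above, indicating how much could be gained by further study.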

Likelihood Assessments

Estimates of likelihood can be particularly problematic when considering rare events (Spetzler and Von Holstein, 1975, p351-352). Participants may find it difficult to distinguish between a 0.001 likelihood and a 0.0001 likelihood, and, to aggravate the situation, there is less data for the participants to consider. The difficulties involved in estimating the likelihood of rare events make the participants prone to a number of estimating biases, and hence the assessments are less reliable (Hillson and Hulett, 2004, p2).

Anchoring on external loss data is particularly noticeable in participants’ likelihood assessments. Despite the fact that the data may be scanty, unreliable or outdated, scenario workshop participants have shown over-reliance on the external likelihood data, to compensate for the difficulties of estimating the probabilities instinctively. Banks need to understand and manage these sources of biases if realistic and useful assessments of likelihood are to be made.

Hillson and Hulett (2004, p2-4) discuss the importance of using a meaningful scale to assess probability. Severity assessments are measured in dollars, which is an unambiguous and familiar scale. Probabilities can be assessed using alternative terms such as frequency, likelihood or chance, which can sometimes cause confusion. The scale can be described using labels (low, medium, high), phrases (impossible, probable, likely), odds (1:50, 1:10), numbers (percentages such as 1%, or decimals such as 0.01) or ranges (<1%, 1-5%, 5-10%). Each method has its own strengths and weaknesses. Labels and phrases can be interpreted subjectively, odds may be difficult to order, specific percentage or decimal values introduce spurious accuracy, and fixed ranges are artificial and may not reflect the true range of probability for a given risk.

Most banks have defined and elicited a ‘frequency’ for events expected to occur more than once per year (n events per year), and a ‘likelihood’ for events expected to occur less than once per year (1 in n years). The ‘1 in n years’ assessment is then converted into a percentage for use in modelling as the likelihood of the event occurring over a one-year horizon. It may be useful to verify the participants’ assessment of likelihood over an n-year horizon by asking them whether the equivalent percentage likelihood over a one-year horizon also agrees with their beliefs.
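As a minimal sketch of this conversion, the example below assumes a constant underlying event rate (a Poisson assumption made here for the example, not one prescribed in this paper); the ‘1 in 20 years’ figure is a hypothetical elicited value.

```python
import math

def annual_likelihood(one_in_n_years: float) -> float:
    """Convert an elicited '1 in n years' assessment into the probability of at
    least one event over a one-year horizon, assuming a constant (Poisson) rate."""
    rate_per_year = 1.0 / one_in_n_years
    return 1.0 - math.exp(-rate_per_year)

# Hypothetical assessment: an event expected roughly once in 20 years.
print(f"{annual_likelihood(20):.1%}")  # about 4.9%, close to the naive 1/20 = 5%
```

Feeding the resulting percentage back to the participants, as suggested above, gives them a chance to confirm that the converted figure still matches their beliefs.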

Banks have generally chosen not to use the raw frequency assessments from scenario workshops in their capital models. Instead, they have calibrated or over-ridden the scenario frequencies with internal or external frequency statistics. Their justification has been that the estimated frequencies, when aggregated, are irrational and grossly misaligned with internal and external data sources. Where banks have adjusted or over-ridden the frequencies in the modelling, they should challenge or reassess the individual scenario assessments, so that the estimates are reasonable both in isolation and in aggregate.

Severity Assessments

The severity of each scenario will have a range of possible values. Normative experts should discuss this concept with workshop participants to get them thinking in terms of probability distributions. Kahneman and Tversky (1974, p1129) discuss two different ways of eliciting subjective probability distributions:

  1. asking the expert to select values that correspond to specified percentiles of the underlying probability distribution;
  2. asking the expert to assess the probabilities that the true value will exceed some specified values.

Banks could consider using one method as their primary approach, and using the other method to check the participants’ assessments for consistency.
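One possible way to use the second method as a cross-check on the first is sketched below. It assumes, purely for illustration, that severity is lognormally distributed and that the expert has supplied a median and a 90th percentile; the dollar figures are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical percentiles elicited under the first method (severity in dollars).
median_loss = 2_000_000
p90_loss = 10_000_000

# Fit a lognormal distribution through the two elicited points.
mu = np.log(median_loss)
sigma = (np.log(p90_loss) - mu) / stats.norm.ppf(0.90)
severity = stats.lognorm(s=sigma, scale=np.exp(mu))

# Exceedance probabilities implied by the fit, to be put back to the expert
# as questions of the second kind ("how likely is a loss above $X?").
for threshold in (5_000_000, 20_000_000, 50_000_000):
    print(f"P(loss > ${threshold:,}) = {severity.sf(threshold):.1%}")
```

If the expert’s direct answers to the exceedance questions differ materially from the implied values, the discrepancy can be explored and the assessments revised.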

Under the first method, banks must decide which percentiles they will ask the experts to assess. Examples include the median, 90th, 95th, 99th and 99.9th percentiles. Some banks have asked participants to assess a “worst case” value, where the worst case is statistically defined as one of the extreme percentiles. Other measures that can be elicited include the mean and the mode. The mean requires consideration of the entire distribution, while the mode, the most frequent value, allows the assessment to be made without considering extreme values. The role of the normative expert becomes important in overseeing the assessment of these statistical values, since the workshop participants may misinterpret what is being asked of them if they lack sound statistical knowledge.

Banks have the option of asking participants to make their assessment as a point estimate, or to select, from a number of ranges, the one within which they believe the severity is likely to fall. Point estimates can introduce spurious accuracy, and the expert may be uncomfortable attempting to assess a highly uncertain value with such precision. Asking for the assessment in buckets or ranges can be useful, depending on how the ranges are constructed.

Hobbs and Kreinovich (2001) assert that people feel uncomfortable making a choice on a scale that is too detailed (with fine granules) or too coarse (with wide ranges), and that there is an optimal choice of granularity that can be used to elicit people’s estimates4. Artificial ranges that are set by some arbitrary formula may not be useful for the expert to assess a quantity meaningfully. If the ranges are too small, the expert may believe the quantity could span a number of adjacent ranges. If the ranges are too wide, the expert may not believe that the quantity could plausibly fall anywhere within the range they have had to select. This can undermine approaches which use a uniform distribution, bounded by the endpoints of each range, to simulate losses based on the expert’s opinion for the capital calculation.
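For concreteness, the bucket-based simulation referred to above might look like the sketch below; the bucket boundaries (set roughly half an order of magnitude apart) and the bucket selected by the expert are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical severity buckets (in dollars), roughly half an order of magnitude wide.
buckets = [(100_000, 300_000), (300_000, 1_000_000), (1_000_000, 3_000_000)]

# Suppose the expert selected the middle bucket for this scenario.
low, high = buckets[1]

# Simulate losses uniformly between the bucket endpoints, as in the approach described above.
simulated_losses = rng.uniform(low, high, size=100_000)
print(f"mean simulated loss: ${simulated_losses.mean():,.0f}")
```

As the next paragraph notes, the flat uniform shape within the selected bucket may be a poor reflection of what the expert actually believes.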

Oakley and O’Hagan (2004, p244) contend that the expert is unlikely to believe that the quantity is uniformly distributed over a large range, and that limiting the elicitation to a wide interval selection is under-using access to experts. If the expert has more information than what has been elicited, banks should consider the possibility that they are throwing away valuable information.

Using Multiple Experts

Scenario assessments can be made by a single expert or a number of experts. Different experts may have differing points of view, and the bank needs to decide how to combine their estimates. Jenkinson (2005, p27) describes two types of method that can be used to combine multiple assessments:

  • The mathematical approach, in which the assessments are first elicited individually and then aggregated using a weighting approach. The weights can be determined based on the level of expertise of each expert. The method should consider the lack of independence in the opinions of the experts, e.g. due to common knowledge.
  • The behavioural approach, which asks a group of experts to share information between themselves, and establish a consensus assessment.

Oakley and O’Hagan (2004, p247) express a preference for the behavioural approach. They question whether a mathematical, weighted-average opinion really represents anyone’s opinion at all. The behavioural approach is also better aligned to the Basel II Framework, as it promotes discussion of operational risk between workshop participants, and it increases awareness of risk exposures for management purposes. Clemen and Winkler (2005) and Ariely et al (2000) provide a detailed discussion of aggregating probability distributions and averaging probability estimates between experts.
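A minimal sketch of the mathematical approach described above, using a simple weighted ‘linear opinion pool’ of elicited annual likelihoods, is shown below; the individual probabilities and the weights reflecting relative expertise are hypothetical, and the sketch does not attempt to address the dependence between opinions noted earlier.

```python
import numpy as np

# Hypothetical annual likelihoods elicited separately from three experts for one scenario.
expert_probabilities = np.array([0.02, 0.05, 0.10])

# Hypothetical weights reflecting each expert's relative expertise (summing to 1).
weights = np.array([0.5, 0.3, 0.2])

# Linear opinion pool: a weighted average of the individual assessments.
pooled_probability = float(weights @ expert_probabilities)
print(f"pooled likelihood: {pooled_probability:.1%}")  # 0.5*2% + 0.3*5% + 0.2*10% = 4.5%
```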

Challenge and Validation

The practical difficulties associated with elicitation can cause the assessments to be poorly calibrated or internally inconsistent (Clemen and Fox, 2005, p2, and Oakley and O’Hagan, 2004, p241). The challenge and validation process can identify and correct assessments that are illogical, inconsistent or incoherent. Challenge can take place during the workshop, whether by multiple experts challenging each other or by the normative expert acting as facilitator. Challenge can also take place outside the workshop by those who are higher up in the organisation (such as Directors or Executive General Managers), deeper down in the organisation (topic experts, for example from IT or HR), and those who span across the organisation (such as the Group Operational Risk function).

The ‘Scenario-based AMA Working Group’ (2003, p4) identifies five techniques that can be used in validation of scenario assessments:

  • the “two pairs of eyes principle”
  • internal audit of the risk assessment process
  • comparison of actual losses against experts’ expectations
  • comparison of the outcome of scenario assessments against internal audit findings
  • challenge by Group functions such as Risk

Kadane and Wolfson (1998, p4) contend that frequent feedback should be given to the expert during the elicitation process. Feedback describing the elicited distribution allows the expert an opportunity to refute any features introduced by the analyst (Oakley and O’Hagan, 2004, p246). The responses can be checked for consistency, and to confirm that the expert really believes them. A graphical or visual representation of the elicited distribution can also be a useful validation tool (Spetzler and Von Holstein, 1975, p355).
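A simple way of producing such a visual check is sketched below: a distribution fitted to the elicited points (here a lognormal, chosen for illustration, with hypothetical dollar values) is plotted so that the expert can confirm or refute the shape the analyst has introduced.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Hypothetical elicited percentiles for a scenario's severity (in dollars).
elicited = {0.50: 2_000_000, 0.90: 10_000_000}

# Fit a lognormal through the elicited points (an illustrative choice of family).
mu = np.log(elicited[0.50])
sigma = (np.log(elicited[0.90]) - mu) / stats.norm.ppf(0.90)
severity = stats.lognorm(s=sigma, scale=np.exp(mu))

# Plot the implied cumulative distribution with the elicited points marked,
# so the expert can see, and challenge, what their answers imply.
losses = np.linspace(0, 40_000_000, 500)
plt.plot(losses, severity.cdf(losses), label="implied distribution")
plt.scatter(list(elicited.values()), list(elicited.keys()), color="red", zorder=3, label="elicited points")
plt.xlabel("loss ($)")
plt.ylabel("cumulative probability")
plt.legend()
plt.show()
```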

Spetzler and Von Holstein (1975, p356) assert that having the expert assign a probability distribution without the help of an analyst often leads to poor assessments, even for experts who are well trained in probability or statistics. A normative expert can play a role in identifying any biases affecting the assessments. Normative experts can look for and challenge inconsistencies, and can reveal which information is most available, any anchors that are being used, and whether any implicit assumptions are being made.

Importance of Documentation

Thorough documentation of the scenario assessment workshops provides transparency for third parties to evaluate how the assessments were reasoned. This can be useful for the challenge and validation process described above, and for the reassessment of those scenarios at future workshops.

The documentation provides a means of checking for consistency between business units running separate scenario assessment workshops. If sufficiently detailed, the documentation can allow third parties to identify any biases or misinterpretations of the questions asked that may have been overlooked during the workshop.

The extent of the documentation required has varied across banks. Where the recording of justification for assessments has not been enforced, it has often been left blank or has provided insufficient detail for a third party to assess the rationale behind an estimate. In particular, the documentation has often lacked any evidence of discussions surrounding the participants’ interpretation of statistical terms, such as a ‘worst case’ or extreme percentile value. The documentation has not included an adequate justification for why an assessment is believed to represent a certain percentile. This makes it difficult for third parties to assess whether the participants have understood the exercise, and whether they have made assessments consistent with the questions being asked.

Conclusion

Banks use scenario analysis to evaluate their exposure to operational risks, particularly those low-frequency/high-impact events for which they lack sufficient relevant data. Most banks have only conducted a few ‘rounds’ of scenario analysis for their operational risk measurement system, leaving further scope for improving the design and conduct of the process.

Academic literature is available covering a wide range of issues relating to scenario analysis and eliciting subjective probability assessments from experts. The literature provides a rich discussion of opportunities to enhance the robustness of the scenario assessment process. Further guidance can be sought from reviewing the use of scenario analysis in other industries, including medicine, nuclear energy, veterinary science, agriculture, meteorology, business studies and economics (see Jenkinson, 2005).

The use of a facilitator with elicitation skills can assist the subject matter experts in interpreting statistical terms and concepts, and can identify any inconsistencies or biases that may arise. Thorough documentation provides transparency for third parties to interpret the rationale behind the assessments, and allows the evaluation of consistency within and across assessments.

Banks can improve their scenario analysis process by increasing their awareness of elicitation biases and uncertainties. Action can be taken to counteract assessment biases and to mitigate uncertainty, and as a result, banks can elicit scenario assessments that more accurately reflect experts’ underlying beliefs about future outcomes.

 

References

Alderweireld, T., Garcia, J. and Leonard, L., 2006, ‘A Practical Operational Risk Scenario Analysis Quantification’, Risk, February 2006, pp93-95.

Anderson, E. and Hattis, D., 1999, ‘Uncertainty and Variability’, Risk Analysis, Vol. 19, No. 1.

Ariely, D., Au, W., Bender, R., Budescu, D., Dietz, C., Gu, H., Wallsten, T. and Zauberman, G., 2000, ‘The Effects of Averaging Subjective Probability Estimates Between and Within Judges’, Journal of Experimental Psychology, Vol. 6, No. 2, pp130-147.

Australian Prudential Regulation Authority (APRA), 2007, Draft Prudential Standard APS 115 Capital Adequacy: Advanced Measurement Approaches to Operational Risk, June 2007.

Basel Committee on Banking Supervision, 2006, ‘International Convergence of Capital Measurement and Capital Standards’, Bank for International Settlements, June 2006.

Clemen, R. and Fox, C., 2005, ‘Subjective probability assessment in decision analysis: Partition dependence and bias toward the ignorance prior’, Management Science, Vol. 51, No. 9, September 2005, pp1417-1432.

Clemen, R. and Ulu, C., 2007, ‘Interior Additivity and Subjective Probability Assessment of Continuous Variables’, Draft, Duke University, Durham, N.C.

Clemen, R. and Winkler, R., 2005, ‘Aggregating Probability Distributions’, Draft, Duke University, Durham, N.C.

Daneshkhah, A., 2004, ‘Uncertainty in Probabilistic Risk Assessment: A Review’, Bayesian Elicitation of Experts’ Probabilities (BEEP) Working Paper, University of Sheffield, U.K.

Fox, C. and Rottenstreich, Y., 2003, ‘Partition Priming in Judgement Under Uncertainty’, Psychological Science, Vol. 14, No. 3, May 2003, pp195-200.

Gosling, J., Oakley, J. and O’Hagan, A., 2007, ‘Non-parametric elicitation for heavy-tailed prior distributions’, Bayesian Analysis, Vol. 2, No. 2, pp1-26.

Hammond, J., Keeney, R. and Raiffa, H., 1998, ‘The Hidden Traps in Decision Making’, Harvard Business Review, September-October 1998, pp47-58.

Hillson, D. and Hulett, D., 2004, ‘Assessing Risk Probability: Alternative Approaches’, Proceedings of PMI Global Congress 2004 EMEA, Prague, Czech Republic.

Hillson, D., 2005, ‘Describing Probability: The Limitations of Natural Language’, Proceedings of PMI Global Congress 2005 EMEA, Edinburgh, UK.

Hobbs, J. and Kreinovich, V., 2001, ‘Optimal Choice of Granularity in Commonsense Estimation: Why Half-Orders of Magnitude’, Proceedings, Joint 9th IFSA World Congress and 20th NAFIPS International Conference, Vancouver, British Columbia, July 2001, pp. 1343-1348.

Jenkinson, D., 2005, ‘The Elicitation of Probabilities – A Review of the Statistical Literature’, Bayesian Elicitation of Experts’ Probabilities (BEEP) Working Paper, University of Sheffield, U.K.

Kadane, J. and Wolfson, L., 1998, ‘Experiences in Elicitation’, The Statistician, Vol. 47, Part 1, pp3-19.

Kahneman, D. and Tversky, A., 1974, ‘Judgement under Uncertainty: Heuristics and Biases’, Science, Vol. 185, September 1974, pp. 1124-1131.

Lau, A. and Leong, T., ‘PROBES: A Framework for Probability Elicitation from Experts’, Department of Computer Science, School of Computing, National University of Singapore.

Marnay, C. and Siddiqui, A., 2006, ‘Addressing an Uncertain Future Using Scenario Analysis’, Ernest Orlando Lawrence Berkeley National Laboratory, University of California.

Oakley, J. and O’Hagan, A., 2004, ‘Probabilistic sensitivity analysis of complex models: a Bayesian approach’, Journal of the Royal Statistical Society, Series B, Vol. 66, Part 3, pp. 751-769.

Oakley, J. and O’Hagan, A., 2004, ‘Probability is perfect, but we can’t elicit it perfectly’, Reliability Engineering and System Safety, Vol. 85, 2004, pp239-248.

Renooij, S. and Witteman, C., 1999, ‘Talking probabilities: communicating probabilistic information with words and numbers’, International Journal of Approximate Reasoning, Vol. 22, 1999, pp169-194.

Scenario-based AMA Working Group, 2003, ‘Scenario-based AMA’, May 2003, Federal Reserve Bank of New York. www.ny.frb.org/newsevents/events/banking/2003/con0529d.pdf

Spetzler, C. and Von Holstein, C. S., 1975, ‘Probability Encoding in Decision Analysis’, Management Science, Vol. 22, No. 3, November 1975, pp340-357.

Tichy, G., 2002, ‘Over-Optimism Among Experts in Assessment and Foresight’, Institute of Technology Assessment manuscript, Austrian Academy of Sciences, Vienna, Austria.

Wiegmann, D., 2005, ‘Developing a methodology for eliciting subjective probability estimates during expert evaluations’, Aviation Human Factors Division, Institute of Aviation, University of Illinois.

Wilson, A., 1994, ‘Cognitive factors affecting subjective probability assessment’, Institute of Statistics and Decision Sciences (ISDS) Discussion Paper #94-02, Duke University, Durham, N.C.

 

 

 

1 APRA (2007, Attachment B, Paragraph 3, p13), and Basel Committee on Banking Supervision (2006, Paragraph 665, p150).

2 ‘Banks’ in this paper refers to those institutions that have applied to APRA to use the Advanced Measurement Approach (AMA) to Operational Risk.

3 A deterministic solution considers only one possible realisation of how a process might evolve over time. A stochastic process allows for the underlying randomness of the process by using probability distributions to represent a range of possible outcomes.

4 They show that the optimal granularity is achieved using half-orders of magnitude, i.e. that each range starts a factor of 10^0.5 (roughly 3 to 4 times) above the previous one.