The 5 Ws and H of IFRS 17 (Part 2)

Previously, we talked about the 5 Ws of IFRS 17. This blog post (Part 2) will discuss the H: How does IFRS 17 replace IFRS 4?

A Consistent Model

Figure 1: The components that make up IFRS 17 insurance contract liabilities.[1]

IFRS 17 introduces the General Measurement Model (GMM) to calculate insurance contract liabilities for all insurance and reinsurance contracts. It is made up of three components:

  1. Present Value of Future Cash Flows (PVFCF)
    1. Expected future cash flows – The current estimates of cash inflows and cash outflows, e.g., premiums, claims, expenses, and acquisition costs.
    2. Discount Rates – Current market discount rates, which are used to adjust the expected future cash flows to their present value.
  2. Risk Adjustment (RA) – The compensation a company requires for bearing insurance risk. Insurance risk is a type of non-financial risk and may consist of the uncertainty of cash flows, the timing of cash flows, or both.
  3. Contractual Service Margin (CSM) – An amount equal and opposite to the net inflow of the two previous components at initial recognition. This ensures that no day-one profit is recognized in Profit or Loss (P&L) for any contract. (A simplified numeric sketch of these components follows this list.)
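To make the three components concrete, below is a minimal numeric sketch of the GMM at initial recognition. All figures (premium, claims, discount rate, and risk adjustment) are hypothetical, and the sign conventions are simplified for illustration.

```python
# Minimal sketch of the GMM building blocks at initial recognition.
# All amounts and the flat discount rate are hypothetical.

def present_value(cash_flows, rate):
    """Discount (time_in_years, amount) pairs to time zero."""
    return sum(amount / (1 + rate) ** t for t, amount in cash_flows)

# Expected future cash flows: premiums as inflows (+), claims/expenses as outflows (-).
expected_cash_flows = [
    (0.0, 1000.0),   # premium received at inception
    (1.0, -400.0),   # expected claims and expenses, year 1
    (2.0, -350.0),   # expected claims and expenses, year 2
]

discount_rate = 0.03                 # current market discount rate (assumed flat)
pvfcf = present_value(expected_cash_flows, discount_rate)

risk_adjustment = 50.0               # compensation for bearing non-financial risk

# Net inflow of the first two components.
net_inflow = pvfcf - risk_adjustment

# The CSM is the equal and opposite amount, so no profit is recognized on day one
# (if the net amount were an outflow, the contracts would be onerous instead).
csm = max(net_inflow, 0.0)

print(f"PVFCF:                      {pvfcf:.2f}")
print(f"Risk adjustment:            {risk_adjustment:.2f}")
print(f"CSM at initial recognition: {csm:.2f}")
print(f"Day-one profit in P&L:      {net_inflow - csm:.2f}")   # 0.00 when profitable
```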

 

More Transparent Information

Figure 2: How IFRS 17 recognizes profit in P&L.[2]

IFRS 17 only allows insurers to recognize profit once insurance services are provided. This means that insurers can no longer recognize the premiums they receive as profit in P&L. Rather, at the end of each reporting period, insurers recognize a portion of the remaining CSM as Insurance Revenue as they fulfill obligations such as paying claims for insured events.
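As a simplified sketch of that release mechanic (assuming, purely for illustration, that the CSM is released in proportion to the coverage units provided each period), the remaining CSM is drawn down into Insurance Revenue over the coverage period:

```python
# Simplified, hypothetical release of the CSM into insurance revenue.
# Assumes allocation over coverage units provided each period.

csm_remaining = 250.0
coverage_units = [100, 100, 100]     # hypothetical units of service per period

for period, units in enumerate(coverage_units, start=1):
    # Allocate over the current period's units plus the expected remaining units.
    release = csm_remaining * units / sum(coverage_units[period - 1:])
    csm_remaining -= release
    print(f"Period {period}: revenue from CSM release = {release:.2f}, "
          f"CSM carried forward = {csm_remaining:.2f}")
```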

Insurance Service Expenses reflect the costs incurred when fulfilling these obligations for a reporting period. These consist of incurred claims and expenses, acquisition costs, and any gains or losses from holding reinsurance contracts. The net amount of Insurance Revenue and Insurance Service Expenses makes up the Insurance Service Result. This approach differentiates the two drivers of profit for the insurer: Insurance Revenue and Investment Income. Investment Income represents the return on the underlying assets of investment-linked contracts, and Insurance Finance Expenses reflect the unwinding of, and changes in, the discount rates used to calculate the PVFCF and CSM.

Better Comparability

Figure 3: A comparison of IFRS 4 and IFRS 17.[3]

Regarding the presentation of financial statements, IFRS 17 requires more granularity in the balance sheet than IFRS 4 (Figure 3), specifically a breakdown of insurance contract liabilities into PVFCF, RA, and CSM. This allows for improved analysis of an insurer’s products and business performance.

On the statement of comprehensive income, IFRS 17 removes Premiums and replaces Change in Insurance Contract Liabilities with the new components introduced on the balance sheet: PVFCF, RA, and CSM. The first items listed now present the insurance components that make up the Insurance Service Result. These are followed by Investment Income and Insurance Finance Expenses, which together determine the Net Financial Result. With a clear distinction between the different sources of profit, this framework allows for better comparability across industries.

Conclusion

In summary, IFRS 17 is the accounting Standard that introduces a consistent model for measuring liabilities for all insurance contracts. It also increases the transparency of the source of insurance-related earnings by separating insurance services from investment returns, which provides global comparability for the first time in the insurance industry.

[1] Appendix B – Illustrations, IFRS 17 Effects Analysis by the IFRS Foundation (page 118).

[2] Preview of IFRS 17 Insurance Contracts, National Standard-Setters webinar by the IFRS Foundation (Slide 9).

[3] Preview of IFRS 17 Insurance Contracts, National Standard-Setters webinar by the IFRS Foundation (Slide 11).

Carmen Loh is a Risk Consultant with FRG. She graduated with her Actuarial Science degree in 2016 from Heriot-Watt University before joining FRG in the following fall. She is currently the subject matter expert on an IFRS 17 implementation project for a general insurance company in the APAC region.

RELATED:

The 5 Ws and H of IFRS 17 (Part 1)

 

The 5 Ws and H of IFRS 17 (Part 1)

International Financial Reporting Standards (IFRS) 17 Insurance Contracts, issued in 2017, represents a major overhaul of financial reporting for insurance companies. However, many in the financial industry are still unfamiliar with the Standard. This blog post, Part 1, aims to answer the five basic W questions (Who, What, When, Where, and Why) of IFRS 17. The H, or How, will be discussed in Part 2.

Who issued IFRS 17?

IFRS 17 is issued by the International Accounting Standards Board (IASB). The IASB specifies how companies must maintain and report their accounts.

What is IFRS 17?

IFRS 17 is the accounting Standard for insurance contracts. The IFRS are designed to bring consistency, transparency, and comparability within financial statements across various global industries, and IFRS 17 applies this approach to the insurance business.

When is the effective date?

The effective date was initially set for January 1, 2021, but industry leaders requested a delay because of the effort required to implement the new Standard alongside IFRS 9. In March 2020, the disruption caused by the COVID-19 pandemic led the IASB to defer the effective date of IFRS 17 to January 1, 2023.

Where does IFRS 17 apply?

IFRS 17 applies to all insurance companies using the IFRS Standards. It has been estimated that 450 insurance companies worldwide will be affected. Insurance companies in Japan and the United States, however, use Generally Accepted Accounting Principles (GAAP), a rules-based approach that is more prescriptive than the principles-based approach of IFRS. Therefore, IFRS 17 does not directly affect insurers in Japan or the U.S., but it could affect multinational groups with insurance business overseas.

Why was IFRS 17 developed?

This section discusses some of the main issues with the current reporting standard (IFRS 4) that led to the issuance of IFRS 17: inconsistent accounting, little transparency, and a lack of comparability (see Figure 1).

Figure 1: Some of the main issues with IFRS 4.[1]

Inconsistent Accounting

IFRS 17 was developed to replace IFRS 4, which was an interim Standard meant to limit changes to existing insurance accounting practices. As IFRS 4 did not provide detailed guidelines, there were many questions left unanswered about the expectations for insurers:

  • Are they required to discount their cash flows?
  • What discount rates should they use?
  • Do they amortize the incurred costs or expense them immediately?
  • Are they required to consider the time value of money when measuring the liabilities?

Hence, insurers came up with different practices to measure their insurance products.

Little Transparency

Analyzing financial statements has been difficult as some insurers do not provide complete information about the sources of profit recognized from insurance contracts. For example, some companies immediately recognize premiums received as revenue. There are also companies that do not separate the investment income from investment-linked contracts when measuring insurance contract liabilities. As a result, regulators cannot determine if the company is generating profit by providing insurance services or by benefiting from good investments.

Lack of Comparability

Some multinational companies consolidate their subsidiaries using different accounting policies, even for the same type of insurance contracts written in different countries. This makes it challenging for investors to compare the financial statements across different industries to evaluate investments.

How does IFRS 17 replace IFRS 4?

IFRS 17 introduces a standard model to measure insurance contract liabilities, changes the way insurers recognize profit (Insurance Revenue), and revamps the presentation of financial statements (see Figure 2). We will dive into these topics in Part 2 of this blog series.

Figure 2: A comparison of IFRS 4 and IFRS 17.[2]

 

[1] Appendix B – Illustrations, IFRS 17 Effects Analysis by the IFRS Foundation (page 118).

[2] Appendix B – Illustrations, IFRS 17 Effects Analysis by the IFRS Foundation (page 118).

 

Carmen Loh is a Risk Consultant with FRG. She graduated with her Actuarial Science degree in 2016 from Heriot-Watt University before joining FRG. She is currently the subject matter expert on an IFRS 17 implementation project for a general insurance company in the APAC region.

Economic Impact Analysis for Credit Unions

In a recent webinar I participated in with SAS, we discussed Economic Impact Analysis (EIA). While EIA is similar in concept to stress testing, its main goal is to allow credit unions to move quickly to evaluate the impact of economic changes on their portfolio—such as those brought about by a crisis like the COVID-19 pandemic.

There are four main components to EIA.

  1. Portfolio data: At a minimum, this needs to be segment-level data with loss performance through time. If needed, it can be augmented with external data.
  2. Scenarios: Multiple economic possibilities are necessary to help assess the timing and magnitude of potential future losses.
  3. Models or methodologies: These are required to link the scenarios to the portfolio to forecast loss amounts.
  4. Visualization of results: This is essential to clearly understand the portfolio’s loss behavior. While tables are useful, nothing illustrates odd behavior better than a picture (e.g., a box plot, tree map, or histogram).

A credit union looking for a practical approach for getting started should consider the following steps:

  • Start with segment-level data instead of account-level data. This should reduce the common complexities that arise when sourcing and cleaning account-level data.
  • Develop segment-level models or methodologies to capture the impacts of macroeconomic changes. These can be simple provided they incorporate relationships to macroeconomic elements.
  • Create multiple scenarios. The more the better. Different scenarios will provide different insights into how the portfolio reacts to changing macroeconomic environments.
  • Execute the models and explore the results. This is where (I believe) the fun begins. Be curious – change portfolio assumptions (e.g., growth or run-off) and scenarios to see how losses react. A minimal sketch of these steps follows this list.
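The sketch below illustrates, under stated assumptions, what these steps can look like in practice: segment-level loss rates linked to a single macroeconomic driver (unemployment) and run across several scenarios. The segments, sensitivities, balances, and scenario paths are all invented for illustration; this is not a recommended model.

```python
# Minimal sketch: segment-level loss rates linked to hypothetical
# unemployment scenarios. All inputs are illustrative only.
import pandas as pd

baseline_loss_rate = {"auto": 0.010, "credit_card": 0.035, "mortgage": 0.004}
unemployment_beta  = {"auto": 0.0020, "credit_card": 0.0060, "mortgage": 0.0010}
balances           = {"auto": 50_000_000, "credit_card": 20_000_000, "mortgage": 150_000_000}
baseline_unemployment = 4.0

# Hypothetical quarterly unemployment paths (percentage points).
scenarios = {
    "baseline": [4.0, 4.2, 4.5, 4.5],
    "adverse":  [6.0, 8.0, 9.0, 8.5],
    "severe":   [8.0, 12.0, 13.0, 11.0],
}

rows = []
for scenario, path in scenarios.items():
    for segment, base_rate in baseline_loss_rate.items():
        for quarter, unemployment in enumerate(path, start=1):
            # Annual loss rate shifted by the unemployment gap, applied quarterly.
            rate = base_rate + unemployment_beta[segment] * (unemployment - baseline_unemployment)
            rows.append({
                "scenario": scenario,
                "segment": segment,
                "quarter": quarter,
                "expected_loss": balances[segment] * max(rate, 0.0) / 4,
            })

results = pd.DataFrame(rows)
print(results.groupby(["scenario", "quarter"])["expected_loss"].sum().round(0))
```

From there, the visualization component could be as simple as pivoting the results by scenario and plotting the loss paths.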

Now is the time to act, to gain an understanding of the economy’s impact on one’s portfolio. But it is worth mentioning that this is also an investment in the future. As mentioned earlier, EIA has its roots in stress testing. By creating an EIA process now, a credit union not only better positions itself to build a robust stress testing platform but also has the foundation to tackle CECL.

To view the webinar on demand, please visit NAFCU.

Jonathan Leonardelli, FRM, Director of Business Analytics for the Financial Risk Group, leads the group responsible for model development, data science, and technical communication. He has over 15 years’ experience in the area of financial risk.

CECL Preparation: Handling Missing Data for CECL Requirements

Most financial institutions (FI’s) find that data is the biggest hurdle when it comes to regulatory requirements: they don’t have enough information, they have the wrong information, or they simply have missing information. With the CECL accounting standard, the range of data required to estimate expected credit losses (e.g., reasonable and supportable forecasts) grew from what was previously required. While this is a good thing in the long run (as the requirements gradually help FI’s build up their inventory of clean, model-ready data), many FI’s are finding it difficult to address data problems right now. In particular, how to handle missing data is a big concern.

Missing data becomes a larger issue because not all missing data is the same. Classifications, based on the root causes of the missing data, are used as guidance in choosing the appropriate method for data replacement. The classifications consist of:

  1. Not missing at random – the cause of the missing data is related to the missing values
    • For example, CLTV values are missing when previous values have exceeded 100.
  2. Missing at random (MAR) – the cause of the missing data is related to observed values of other variables
    • For example, DTI values are missing when the number of borrowers is 2 or more.
  3. Missing completely at random (MCAR) – the cause of the missing data is unrelated to values of the variable or other variables; data is missing due to an entirely random process
    • For example, LTV values are missing because a system outage caused recently loaded data to be reset to default value of missing.

Once the reason for the missing data has been classified, it is easier to determine a resolution. For example, if the data is MCAR there is no pattern, so dropping the observations with missing values involves no loss of information. Unfortunately, data is rarely MCAR.

The following methods (not meant to be all-inclusive) are among those an FI may use to handle the other, more common, cases; a short code illustration of two of these methods appears after the list.

Last observation carried forward / backward – For a given account, use a non-missing value of that variable to fill the missing values before and/or after it.
  • Pros: Simple; uses an actual value the account has; useful for origination variables
  • Cons: Assumes stability in account behavior; assumes data is MCAR

Mean imputation – Use the average of the observed values in place of the missing value.
  • Pros: Simple
  • Cons: Distorts the empirical distribution of the data; does not use all information in the data set

Hot decking and cold decking – Replace missing values with a value from a similar observation in the sample (cold decking uses a similar observation out of sample).
  • Pros: Conceptually straightforward; uses existing relationships in the data
  • Cons: Can be difficult to define the characteristics of a similar observation; continuous data can be problematic; assumes data is MAR

Regression – Use univariate or multivariate regression models to impute the missing value, with the missing variable as the dependent variable.
  • Pros: Fairly easy to implement; uses existing relationships in the data
  • Cons: Can lead to overstating relationships among the variables; estimated values may fall outside accepted ranges; assumes data is MAR
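As a minimal, hypothetical illustration (the column names and values below are invented), the first two methods might look like this with pandas:

```python
# Minimal sketch of LOCF/NOCB and mean imputation on hypothetical loan data.
import numpy as np
import pandas as pd

loans = pd.DataFrame({
    "account_id": [1, 1, 1, 2, 2, 2],
    "period":     [1, 2, 3, 1, 2, 3],
    "dti":        [0.32, np.nan, 0.35, np.nan, 0.40, np.nan],
})

# Last observation carried forward (and backward) within each account.
loans["dti_locf"] = (
    loans.sort_values(["account_id", "period"])
         .groupby("account_id")["dti"]
         .transform(lambda s: s.ffill().bfill())
)

# Mean imputation using the observed values across the sample.
loans["dti_mean"] = loans["dti"].fillna(loans["dti"].mean())

print(loans)
```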

Understanding why the data is missing is an important first step in resolving the issue. Using the imputation methods outlined above can provide a temporary solution in creating clean historical data for methodology development. However, in the long run, FI’s will benefit from establishing a more permanent solution by constructing data standards and procedures and implementing a robust ongoing monitoring process to ensure the data is accurate, clean, and consistent.

 

Resources:

  1. FASB Accounting Standards Update, No. 2016-13, Financial Instruments – Credit Losses (Topic 326).

Samantha Zerger, business analytics consultant with FRG, is skilled in technical writing. Since graduating from the North Carolina State University’s Financial Mathematics Master’s program in 2017 and joining FRG, she has taken on leadership roles in developing project documentation as well as improving internal documentation processes.

CECL Preparation: How Embracing SR 11-7 Guidelines Can Support the CECL Process

The Board of Governors of the Federal Reserve System’s SR 11-7 supervisory guidance (2011) provides an effective model risk management framework for financial institutions (FI’s). SR 11-7 covers everything from the definition of a model to the robust policies and procedures that should exist within a model risk management framework. To reduce model risk, an FI should consider following the guidance throughout its internal and regulatory processes, as its guidelines are comprehensive and reflect a banking industry standard.

The following items and quotations represent an overview of the SR 11-7 guidelines (Board of Governors of the Federal Reserve System, 2011):

  1. The definition of a model – “the term model refers to a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates.”
  2. A focus on the purpose/use of a model – “even a fundamentally sound model producing accurate outputs consistent with the design objective of the model may exhibit high model risk if it is misapplied or misused.”
  3. The three elements of model risk management:
    • Robust model development, implementation, and use – “the design, theory, logic underlying the model should be well documented and generally supported by published research and sound industry practice.”
    • Sound model validation process – “an effective validation framework should include three core elements: evaluation of conceptual soundness, …; ongoing monitoring, …, and benchmarking; and outcomes analysis, …”
    • Governance – “a strong governance framework provides explicit support and structure to risk management functions through policies defining relevant risk management activities, procedures that implement those policies, allocation of resources, and mechanisms for evaluating whether policies and procedures are being carried out as specified.”

Much of what the SR 11-7 guidelines discuss applies to the new aspects introduced by the CECL accounting standard (FASB, 2016). Any FI subject to CECL must provide explanations, justifications, and rationales for the entirety of the CECL process, including (but not limited to) model development, validation, and governance. The SR 11-7 guidelines will help FI’s develop effective CECL processes that limit model risk.

Some considerations from the SR 11-7 guidelines with regard to the components of CECL include (but are not limited to):

  • Determining appropriateness of data and models for CECL purposes. Existing processes may need to be modified due to some differing CECL requirements (e.g., life of loan loss estimation).
  • Completing comprehensive documentation and testing of model development processes. Existing documentation may need to be updated to comply with CECL (e.g., new models or implementation processes).
  • Accounting for model uncertainty and inaccuracy through the understanding of potential limitations/assumptions. Existing model documentation may need to be re-evaluated to determine if new limitations/assumptions exist under CECL.
  • Ensuring validation independence from model development. Existing validation groups may need to be further separated from model development (e.g., external validators).
  • Developing a strong governance framework specifically for CECL purposes. Existing policies/procedures may need to be modified to ensure CECL processes are being covered.

The SR 11-7 guidelines can provide FI’s with the information they need to start their CECL process. Although not mandated, following these guidelines helps reduce model risk and establishes standards that teams within and across FI’s can follow and regard as a true industry standard.

Resources:

  1. Board of Governors of the Federal Reserve System. “SR 11-7 Guidance on Model Risk Management”. April 4, 2011.
  2. Daniel Brown and Dr. Craig Peters. “New Impairment Model: Governance Considerations”. Moody’s Analytics Risk Perspectives. The Convergence of Risk, Finance, and Accounting: CECL. Volume VIII. November 2016.
  3. Financial Accounting Standards Board (FASB). Financial Instruments – Credit Losses (Topic 326). No. 2016-13. June 2016.

Samantha Zerger, business analytics consultant with FRG, is skilled in technical writing. Since graduating from the North Carolina State University’s Financial Mathematics Master’s program in 2017 and joining FRG, she has taken on leadership roles in developing project documentation as well as improving internal documentation processes.

 

CECL Preparation: Documenting CECL

The CECL Standard requires more than just another update in the calculation of a financial institution’s (FI’s) allowance for credit losses; the new standard also pushes institutions to be more involved in the entire allowance process, especially on the management/executive level. From explanations, justifications and rationales to policies and procedures, the standard requires them all. The FI needs to discuss them, understand them, and document them.

The first step is to discuss all decisions that must be made regarding the CECL process. This includes everything from the definition of default to the justification of which methodology to use for which segment of the data. Although these discussions may be onerous, the CECL standard requires full understanding and completeness of all decisions. Once there is understanding, all decisions must be documented for regulatory purposes:

CECL Topic 326-20-50-10: An entity shall provide information that enables a financial statement user to do the following:

  1. Understand management’s method for developing its allowance for credit losses.
  2. Understand the information that management used in developing its current estimate of expected credit losses.
  3. Understand the circumstances that caused changes to the allowance for credit losses, thereby affecting the related credit loss expense (or reversal) reported for the period.

CECL Topic 326-20-50-11: To meet the objectives in paragraph 326-20-50-10, an entity shall disclose all of the following by portfolio segment and major security type:

  1. A description of how expected loss estimates are developed
  2. A description of the entity’s accounting policies and methodology to estimate the allowance for credit losses, as well as discussion of the factors that influenced management’s current estimate of expected credit losses, including:
    • Past events
    • Current conditions
    • Reasonable and supportable forecasts about the future
  3. A discussion of risk characteristics relevant to each portfolio segment
  4. Etc.

Although these may seem like surprising jumps in requirements, they are simply more clearly defined requirements than those under the existing ALLL guidance. Note that some of the general requirements under the existing guidance will remain relevant under CECL, such as:

  • “the need for institutions to appropriately support and document their allowance estimates”
  • the “…responsibility for developing, maintaining, and documenting a comprehensive, systematic, and consistently applied process for determining the amounts of the ACL and the provision for credit losses.”
  • the requirement “…that allowances be well documented, with clear explanations of the supporting analyses and rationale.”

As you can see, documentation is an important component of the CECL standard. While the documentation will, at least initially, require more effort to produce, it will also give the FI the opportunity to fully understand the inner workings of its CECL process.

Lastly, advice to avoid some headache—take the time to document throughout the entire process of CECL. As my math professor always said, “the due date is not the do date.”

Resources:

  1. FASB Accounting Standards Update, No. 2016-13, Financial Instruments – Credit Losses (Topic 326).
  2. Frequently Asked Questions on the New Accounting Standard on Financial Instruments – Credit Losses. FIL-20-2019. April 3, 2019.

Samantha Zerger, business analytics consultant with FRG, is skilled in technical writing. Since graduating from the North Carolina State University’s Financial Mathematics Master’s program in 2017 and joining FRG, she has taken on leadership roles in developing project documentation as well as improving internal documentation processes.

CECL Preparation: The Power of Vintage Analysis

I would argue that a critical step in getting ready for CECL is to review the vintage curves of the segments that have been identified. Not only do the resulting graphs provide useful information, but the process itself also requires thought about how to prepare the data.

Consider the following graph of auto loan losses for different vintages of Not-A-Real-Bank bank[1]:

Figure: Auto loan losses for different vintages of a hypothetical bank.

While this is a highly stylized depiction of vintage curves, its intent is to illustrate what information can be gleaned from such a graph. Consider the following:

  1. A clear end to the seasoning period can be determined (period 8).
  2. Outlier vintages can be identified (2015Q4).
  3. Visual confirmation that the segmentation captures risk profiles (there is not a substantial number of vintages behaving oddly).

But that’s not all! To get to this graph, some important questions need to be asked about the data (a minimal sketch of how such curves can be assembled follows the list below). For example:

  1. Should prepayment behavior be captured when deriving the loss rates? If so, what’s the definition of prepayment?
  2. At what time period should the accumulation of losses be stopped (e.g., contractual term)?
  3. Is there enough loss[2] behavior to model on the loan level?
  4. How should accounts that renew be treated (e.g., put in new vintage)?
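Here is a minimal sketch of assembling cumulative loss-rate curves by vintage. The column names and figures are hypothetical, and the data is assumed to already be aggregated to the vintage/age level:

```python
# Minimal sketch: cumulative loss-rate curves by vintage from segment-level data.
import pandas as pd

data = pd.DataFrame({
    "vintage":         ["2015Q3"] * 4 + ["2015Q4"] * 4,
    "age_in_quarters": [1, 2, 3, 4, 1, 2, 3, 4],
    "net_loss":        [0.0, 50.0, 120.0, 180.0, 0.0, 150.0, 400.0, 600.0],
    "orig_balance":    [100_000.0] * 8,   # originated balance of each vintage
})

# Cumulative losses as a share of the vintage's originated balance.
data["cum_loss_rate"] = (
    data.sort_values(["vintage", "age_in_quarters"])
        .groupby("vintage")["net_loss"].cumsum() / data["orig_balance"]
)

# One column per vintage; each column is a vintage curve (plot with curves.plot()).
curves = data.pivot(index="age_in_quarters", columns="vintage", values="cum_loss_rate")
print(curves)
```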

In conclusion, performing vintage analysis is more than just creating a picture with many different colors. It provides insight into the segments, makes one consider the data, and, if the data is appropriately constructed, positions one for subsequent analysis and/or modeling.

Jonathan Leonardelli, FRM, Director of Business Analytics for the Financial Risk Group, leads the group responsible for model development, data science, documentation, testing, and training. He has over 15 years’ experience in the area of financial risk.

 

[1] Originally I called this bank ACME Bank, but when I searched to see if one existed I got this, this, and this…so I changed the name. I then did a search of the new name and promptly fell into a search engine rabbit hole that, after a while, I climbed out of with the realization that for any one- or two-word combination I come up with, someone else has already done the same and then added "bank" to the end.

[2] You can also build vintage curves on defaults or prepayment.

 

RELATED:

CECL—Questions to Consider When Selecting Loss Methodologies

CECL—The Caterpillar to Butterfly Evolution of Data for Model Development

CECL—Data (As Usual) Drives Everything

CECL Preparation: Questions to Consider When Selecting Loss Methodologies

Paragraph 326-20-30-3 of the Financial Accounting Standards Board (FASB) standards update[1] states: “The allowance for credit losses may be determined using various methods”. I’m not sure if any statement, other than “We need to talk”, can be as fear-inducing. Why is it scary? Because in the world of details and accuracy, this statement is remarkably vague and not prescriptive.

Below are some questions to consider when determining the appropriate loss methodology approaches for a given segment.

How much history do you have?

If a financial institution (FI) has limited history[2], then the options available to it are, well, limited. To build a model, one needs sufficient data to capture the behavior (e.g., performance or payment) of accounts. Without enough data the probability of successfully building a model is low. Worse yet, even if one builds a model, the likelihood of it being useful and robust is minimal. As a result, loss methodology approaches that do not need a lot of data should be considered (e.g., discounted cash flow or a qualitative factor approach based on industry information).

Have relevant business definitions been created?

The loss component approach (decomposing loss into probability of default (PD), loss given default (LGD), and exposure at default (EAD)) is considered a leading practice at banks[3]; a minimal sketch of the calculation follows this paragraph. However, in order to use this approach, definitions of default and, arguably, paid-in-full need to be created for each segment being modeled. (Note: these definitions can be the same or different across segments.) Without these definitions, one does not know when an account has defaulted or paid off.
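The sketch below shows the basic arithmetic of the loss component approach for a single segment; the PD, LGD, and EAD figures are hypothetical.

```python
# Minimal sketch of the loss component approach for one segment.
# The PD, LGD, and EAD figures are hypothetical.
pd_rate = 0.02          # probability of default over the measurement period
lgd = 0.45              # loss given default (share of exposure lost)
ead = 1_000_000.0       # exposure at default

expected_loss = pd_rate * lgd * ead
print(f"Expected loss for the segment: {expected_loss:,.0f}")   # 9,000
```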

Is there a sufficient number of losses or defaults in the data?

Many of the loss methodologies available for consideration (e.g., loss component or vintage loss rates) require enough losses to discern a pattern. As a result, banks that are blessed with infrequent losses can feel cursed when they try to implement one of those approaches. While low losses do not necessarily rule out these approaches, they do make for a more challenging process.

Are loan level attributes available, accurate, and updated appropriately?

This question tackles the granularity of an approach rather than the approach itself. As mentioned in the post CECL – Data (As Usual) Drives Everything, there are three different levels of data granularity a model can be built on. Typically, the decision is between the loan level and the segment level. Loan-level models are great for capturing sensitivities to loan characteristics and macroeconomic events, provided the loan characteristics are accurate and updated (if needed) at a regular interval.

Jonathan Leonardelli, FRM, Director of Business Analytics for the Financial Risk Group, leads the group responsible for model development, data science, documentation, testing, and training. He has over 15 years’ experience in the area of financial risk.

 

[1] The FASB accounting standards update can be found here.

[2] There is no consistent rule, at least that I’m aware of, that defines “limited history”. That said, we typically look for clean data reaching back through an economic cycle.

[3] See: Capital Planning at Large Bank Holding Companies: Supervisory Expectations and Range of Current Practice, August 2013.

RELATED:

CECL—The Caterpillar to Butterfly Evolution of Data for Model Development

CECL—Data (As Usual) Drives Everything

CECL Preparation: The Caterpillar to Butterfly Evolution of Data for Model Development

I don’t know about you, but I find caterpillars to be a bit creepy[1]. On the other hand, I find butterflies to be beautiful[2]. Oddly enough, this aligns to my views on the different stages of data in relation to model development.

As a financial institution (FI) prepares for CECL, it is strongly suggested (by me at least) to know which stage the data falls into. Knowing its stage provides one with guidance on how to proceed.

The Ugly

At FRG we use the term dirty data to describe data that is ugly. Dirty data typically has the following characteristics (the list is not comprehensive):

  • Unexplainable missing values: The key word is unexplainable. Missing values can mean something (e.g., a value has not been captured yet) but often they indicate a problem. See this article for more information.
  • Inconsistent values: For example, a character variable that holds values for state might have Missouri, MO, or MO. as values. A numeric variable for interest rate might hold a value as a percent (7.5) or as a decimal (0.075).
  • Poor definitional consistency: This occurs when a rule that is used to classify some attribute of an account changes during history. For example, at one point in history a line of credit might be indicated by a nonzero original commitment amount, but at a different point it might be indicated by whether a revolving flag is non-missing.

The Transition

You should not model or perform analysis using dirty data. Therefore, the next step in the process is to transition dirty data into clean data.

Transitioning to clean data, as the name implies, requires scrubbing the information. The main purpose of this step is to address the issues identified in the dirty data. That is, one would want to fix missing values (e.g., imputation), standardize variable values (e.g., all states identified by a two-character code), and correct inconsistent definitions (e.g., a line indicator always based on a nonzero original commitment amount). A minimal sketch of this scrubbing follows.
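For illustration only (the column names, values, and rules below are hypothetical), a scrubbing step might look like this:

```python
# Minimal sketch of scrubbing dirty data into clean data.
import numpy as np
import pandas as pd

raw = pd.DataFrame({
    "state":         ["Missouri", "MO", "MO.", "NC"],
    "interest_rate": [7.5, 0.075, 8.0, 0.0825],   # mixed percent / decimal values
    "credit_score":  [720, np.nan, 680, 700],
})

clean = raw.copy()

# Standardize variable values: all states as two-character codes.
clean["state"] = clean["state"].str.upper().str.rstrip(".").replace({"MISSOURI": "MO"})

# Standardize units: treat values above 1 as percentages and convert to decimals.
clean["interest_rate"] = clean["interest_rate"].where(
    clean["interest_rate"] <= 1, clean["interest_rate"] / 100
)

# Address missing values (here, simple mean imputation).
clean["credit_score"] = clean["credit_score"].fillna(clean["credit_score"].mean())

print(clean)
```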

The Beautiful

A final step must be taken before data can be used for modeling. This step takes clean data and converts it to model-ready data.

At FRG we use the term model-ready to describe clean data with the relevant business definitions applied. An example of a relevant business definition would be how an FI defines default[3]. Once the definition has been created, the corresponding logic needs to be applied to the clean data in order to create, say, a default indicator variable (see the sketch below).
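Continuing the illustration (and assuming, hypothetically, a definition of default as 90+ days past due or non-accrual), applying that definition could look like this:

```python
# Minimal sketch: apply a hypothetical default definition to clean data
# to create a model-ready default indicator.
import pandas as pd

clean = pd.DataFrame({
    "account_id":    [1, 2, 3],
    "days_past_due": [0, 95, 30],
    "non_accrual":   [False, False, True],
})

clean["default_flag"] = (
    (clean["days_past_due"] >= 90) | clean["non_accrual"]
).astype(int)

print(clean)
```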

Just like a caterpillar metamorphosing into a butterfly, dirty data needs to morph into model-ready data for an FI to enjoy its true beauty. And only then can an FI move forward on model development.

 

Jonathan Leonardelli, FRM, Director of Business Analytics for the Financial Risk Group, leads the group responsible for model development, data science, documentation, testing, and training. He has over 15 years’ experience in the area of financial risk.

 

[1] Yikes!

[2] Pretty!

[3] E.g., is it 90+ days past due (DPD) or 90+ DPD or in bankruptcy or in non-accrual or …?

 

RELATED:

CECL—Questions to Consider When Selecting Loss Methodologies

CECL—Data (As Usual) Drives Everything

CECL Preparation: Data (As Usual) Drives Everything

To appropriately prepare for CECL, a financial institution (FI) must have a hard heart-to-heart with itself about its data. Almost always, simply collecting data in a worksheet, reviewing it for gaps, and then giving it the thumbs up is insufficient.

Data drives all parts of the CECL process. The sections below, by no means exhaustive, highlight key areas where your data, simply by being your data, constrains your options.

Segmentation

Paragraph 326-20-30-2 of the Financial Accounting Standards Board (FASB) standards update[1] states: “An entity shall measure expected credit losses of financial assets on a collective (pool) basis when similar risk characteristic(s) exist.” It then points to paragraph 326-20-55-5 which provides examples of risk characteristics, some of which are: risk rating, financial asset type, and geographical location.

Suggestion: prior to reviewing your data, consider what risk profiles are in your portfolio. After that, review your data to see if it can adequately capture those risk profiles (a short sketch of such a review follows the list below). As part of that process, consider reviewing:

  • Frequency of missing values in important variables
  • Consistency in values of variables
  • Definitional consistency[2]
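A first pass at this review might look like the following sketch. The file name and column names are hypothetical; the point is simply to quantify missing values, inspect value consistency, and check whether a classification rule holds through history.

```python
# Minimal sketch of a first-pass data review; all names are hypothetical.
import pandas as pd

loans = pd.read_csv("loan_history.csv")   # hypothetical historical loan file

# Frequency of missing values in important variables.
print(loans[["risk_rating", "asset_type", "geography"]].isna().mean())

# Consistency in values of variables (look for variants like "MO", "MO.", "Missouri").
print(loans["geography"].value_counts(dropna=False))

# Rough check on definitional consistency: does the share of accounts flagged as
# lines of credit stay stable by year? (Assumes as_of_date is a string like "2019-03-31".)
print(loans.groupby(loans["as_of_date"].str[:4])["line_of_credit_flag"].mean())
```
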
Methodology Selection

The FASB standards update does not provide guidance as to which methodologies to use[3]. That decision is entirely up to the FI[4]. However, the methodologies available to the FI are limited by the data it has. For example, if an FI has limited history, then any of the methodologies rooted in historical behavior (e.g., vintage analysis or loss component) are likely out of the question.

Suggestion: review the historical data and ask yourself these questions: 1) do I have sufficient data to capture the behavior for a given risk profile?; 2) is my historical data of good quality?; 3) are there gaps in my history?

Granularity of Model

Expected credit loss can be determined at three different levels of granularity: loan, segment (i.e., risk profile), and portfolio. Each granularity level has a set of pros and cons, but which level an FI can use depends on the data.

Suggestion: review the variables that are account specific (e.g., loan-to-value, credit score, number of accounts with the institution) and ask yourself: Are the sources of these variables reliable? Do they get refreshed often enough to capture changes in customer behavior or the macroeconomic environment?

Hopefully, this post has started you critically thinking about your data. While data review might seem daunting, I cannot stress enough—it’s needed, it’s critical, it’s worth the effort.

 

Jonathan Leonardelli, FRM, Director of Business Analytics for the Financial Risk Group, leads the group responsible for model development, data science, documentation, testing, and training. He has over 15 years’ experience in the area of financial risk.

 

[1] You can find the update here

[2] More on what these mean in a future blog post

[3] Paragraph 326-20-30-3

[4] A future blog post will cover some questions to ask to guide in this decision.

 

RELATED:

CECL—The Caterpillar to Butterfly Evolution of Data for Model Development
