CECL – The Power of Vintage Analysis

I would argue that a critical step in getting ready for CECL is to review the vintage curves of the segments that have been identified. Not only do the resulting graphs provide useful information but the process itself also requires thought on how to prepare the data.

Consider the following graph of auto loan losses for different vintages of Not-A-Real-Bank bank[1]:

[Figure: stylized vintage curves – cumulative loss rate by period on book for each origination quarter]

While this is a highly-stylized depiction of vintage curves, its intent is to illustrate what information can be gleaned from such a graph. Consider the following:

  1. A clear end to the seasoning period can be determined (period 8)
  2. Outlier vintages can be identified (2015Q4)
  3. Visual confirmation that the segmentation captures risk profiles (there isn't a substantial number of vintages acting oddly)

But that's not all! To get to this graph, some important questions need to be asked about the data (a sketch of the underlying calculation follows the list below). For example:

  1. Should prepayment behavior be captured when deriving the loss rates? If so, what is the definition of prepayment?
  2. At what point should the accumulation of losses stop (e.g., the contractual term)?
  3. Is there enough loss[2] behavior to model at the loan level?
  4. How should accounts that renew be treated (e.g., placed in a new vintage)?
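
To make those questions concrete, here is a minimal sketch of how vintage curves might be assembled from loan-level data. It is illustrative only: the column names (loan_id, orig_quarter, period_on_book, charge_off_amt, orig_balance) and the cutoff horizon are assumptions, and prepayment handling is left to whatever definition you settle on:

import pandas as pd

def build_vintage_curves(loans: pd.DataFrame, max_period: int = 24) -> pd.DataFrame:
    """Cumulative loss rate by vintage and period on book (illustrative).

    Assumed columns: loan_id, orig_quarter (e.g., '2015Q4'), period_on_book
    (quarters since origination), charge_off_amt, orig_balance.
    """
    # Stop accumulating losses at a chosen horizon (question 2 above).
    obs = loans[loans["period_on_book"] <= max_period]

    # Dollar losses by vintage and period on book.
    losses = (obs.groupby(["orig_quarter", "period_on_book"])["charge_off_amt"]
                 .sum()
                 .unstack(fill_value=0.0))

    # Originated balance per vintage (one row per loan).
    orig_bal = (loans.drop_duplicates("loan_id")
                     .groupby("orig_quarter")["orig_balance"]
                     .sum())

    # Cumulative loss rate: running sum of losses over originated balance.
    return losses.cumsum(axis=1).div(orig_bal, axis=0)

# Plotting the transposed result shows where the curves flatten (seasoning)
# and which vintages sit away from the pack (outliers).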

In conclusion, performing vintage analysis is more than just creating a picture with many different colors. It provides insight into the segments, makes one consider the data, and, if the data is appropriately constructed, positions one for subsequent analysis and/or modeling.

Jonathan Leonardelli, FRM, Director of Business Analytics for the Financial Risk Group, leads the group responsible for model development, data science, documentation, testing, and training. He has over 15 years’ experience in the area of financial risk.

 

[1] Originally I called this bank ACME Bank, but when I searched to see if one existed I got this, this, and this…so I changed the name. I then did a search of the new name and promptly fell into a search engine rabbit hole that, after a while, I climbed out of with the realization that for any one- or two-word combination I come up with, someone else has already done the same and then added "bank" to the end.

[2] You can also build vintage curves on defaults or prepayment.

 

RELATED:

CECL—Questions to Consider When Selecting Loss Methodologies

CECL—The Caterpillar to Butterfly Evolution of Data for Model Development

CECL – Data (As Usual) Drives Everything

CECL—Questions to Consider When Selecting Loss Methodologies

Paragraph 326-20-30-3 of the Financial Accounting Standards Board (FASB) standards update[1] states: "The allowance for credit losses may be determined using various methods." I'm not sure if any statement, other than "We need to talk", can be as fear-inducing. Why is it scary? Because in the world of details and accuracy, this statement is remarkably vague and not prescriptive.

Below are some questions to consider when determining the appropriate loss methodology approaches for a given segment.

How much history do you have?

If a financial institution (FI) has limited history[2], then the options available to it are, well, limited. To build a model, one needs sufficient data to capture the behavior (e.g., performance or payment) of accounts. Without enough data, the probability of successfully building a model is low. Worse yet, even if one builds a model, the likelihood of it being useful and robust is minimal. As a result, loss methodology approaches that do not need a lot of data should be considered (e.g., a discounted cash flow or a qualitative factor approach based on industry information).
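
For illustration only, here is a bare-bones sketch of a discounted cash flow calculation of expected credit loss; the expected cash flows and the effective interest rate are inputs the analyst must supply (from assumptions or industry information), not outputs of the method:

def dcf_expected_credit_loss(amortized_cost, expected_cashflows, effective_rate):
    """Allowance under a discounted cash flow approach (illustrative sketch).

    expected_cashflows: credit-adjusted cash flows per period (assumed inputs)
    effective_rate:     effective interest rate per period
    """
    pv = sum(cf / (1 + effective_rate) ** t
             for t, cf in enumerate(expected_cashflows, start=1))
    # The allowance is the shortfall of present value versus amortized cost.
    return max(amortized_cost - pv, 0.0)

# Hypothetical example: a loan carried at 100, expected to return 9 per year
# for four years plus 70 in year five, discounted at a 6% effective rate.
allowance = dcf_expected_credit_loss(100.0, [9, 9, 9, 9, 70], 0.06)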

Have relevant business definitions been created?

The loss component approach (decomposing loss into probability of default (PD), loss given default (LGD), and exposure at default (EAD)) is considered a leading practice at banks[3]. However, in order to use this approach, definitions of default and, arguably, paid-in-full need to be created for each segment being modeled. (Note: these definitions can be the same or different across segments.) Without these definitions, one does not know when an account has defaulted or paid off.
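
As a rough illustration of the decomposition (not a substitute for the component models themselves), expected loss for a segment is simply the product of the three pieces; the inputs below are placeholders:

def expected_loss(prob_default, lgd, ead):
    """Loss component approach: expected loss = PD x LGD x EAD (illustrative).

    prob_default - probability of default over the horizon (per the default definition)
    lgd          - loss given default, as a fraction of exposure
    ead          - exposure at default, in dollars
    """
    return prob_default * lgd * ead

# Placeholder inputs, not benchmarks: a 2% default probability, 45% severity,
# and $1,000,000 of exposure imply an expected loss of $9,000.
el = expected_loss(0.02, 0.45, 1_000_000)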

Is there a sufficient number of losses or defaults in the data?

Many of the loss methodologies available for consideration (e.g., loss component or vintage loss rates) require enough losses to discern a pattern. As a result, banks that are blessed with infrequent losses can feel cursed when they try to implement one of those approaches. While low losses do not necessarily rule out these approaches, they do make for a more challenging process.
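
One quick feasibility check is to simply count loss events by segment before committing to one of these approaches. The sketch below is an assumption-laden illustration: the column names and the minimum-event threshold are placeholders, not standards:

import pandas as pd

def loss_event_counts(loans, min_events=30):
    """Count loss events per segment and flag segments that look too thin to model.

    Assumed columns: segment, loss_flag (0/1). The min_events threshold is a
    placeholder, not a rule.
    """
    counts = (loans.groupby("segment")["loss_flag"]
                   .sum()
                   .rename("loss_events")
                   .to_frame())
    counts["sufficient_for_modeling"] = counts["loss_events"] >= min_events
    return counts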

Are loan-level attributes available, accurate, and updated appropriately?

This question tackles the granularity of an approach rather than the approach itself. As mentioned in the post CECL – Data (As Usual) Drives Everything, there are three different levels of data granularity a model can be built on. Typically, the decision is between the loan level and the segment level. Loan-level models are great for capturing sensitivities to loan characteristics and macroeconomic events, provided the loan characteristics are accurate and updated (if needed) at a regular interval.

Jonathan Leonardelli, FRM, Director of Business Analytics for the Financial Risk Group, leads the group responsible for model development, data science, documentation, testing, and training. He has over 15 years’ experience in the area of financial risk.

 

[1] The FASB accounting standards update can be found here

[2] There is no consistent rule, at least that I’m aware of, that defines “limited history”. That said, we typically look for clean data reaching back through an economic cycle.

[3] See: Capital Planning at Large Bank Holding Companies: Supervisory Expectations and Range of Current Practice, August 2013

RELATED:

CECL—The Caterpillar to Butterfly Evolution of Data for Model Development

CECL – Data (As Usual) Drives Everything

CECL—The Caterpillar to Butterfly Evolution of Data for Model Development

I don’t know about you, but I find caterpillars to be a bit creepy[1]. On the other hand, I find butterflies to be beautiful[2]. Oddly enough, this aligns to my views on the different stages of data in relation to model development.

As a financial institution (FI) prepares for CECL, it is strongly suggested (by me at least) that it know which stage its data falls into. Knowing the stage provides guidance on how to proceed.

The Ugly

At FRG we use the term dirty data to describe data that is ugly. Dirty data typically has the following characteristics (the list is not comprehensive):

  • Unexplainable missing values: The key word is unexplainable. Missing values can mean something (e.g., a value has not been captured yet) but often they indicate a problem. See this article for more information.
  • Inconsistent values: For example, a character variable that holds values for state might have Missouri, MO, or MO. as values. A numeric variable for interest rate might hold values as both a percent (7.5) and a decimal (0.075).
  • Poor definitional consistency: This occurs when a rule that is used to classify some attribute of an account changes during history. For example, at one point in history a line of credit might be indicated by a nonzero original commitment amount, but at a different point it might be indicated by whether a revolving flag is non-missing.

The Transition

You should not model or perform analysis using dirty data. Therefore, the next step in the process is to transition dirty data into clean data.

Transitioning to clean data, as the name implies, requires scrubbing the information. The main purpose of this step is to address the issues identified in the dirty data. That is, one would want to fix missing values (e.g., imputation), standardize variable values (e.g., all states are identified by a two-character code), and correct inconsistent definitions (e.g., a line indicator is always based on a nonzero original commitment amount).
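
A minimal pandas sketch of that scrubbing step might look like the following; the column names and the specific fixes (a state-code map, a percent-to-decimal conversion, median imputation, a single line-of-credit rule) are assumptions for illustration, not FRG's actual routines:

import pandas as pd

def clean_loan_data(df):
    """Turn dirty data into clean data (illustrative rules only)."""
    out = df.copy()

    # Standardize inconsistent values: map state variants to a two-character code.
    out["state"] = out["state"].replace({"Missouri": "MO", "MO.": "MO"})

    # Put interest rates on one scale: treat values greater than 1 as percents.
    out["interest_rate"] = out["interest_rate"].where(
        out["interest_rate"] <= 1, out["interest_rate"] / 100)

    # Address unexplainable missing values (simple median imputation as an example).
    out["interest_rate"] = out["interest_rate"].fillna(out["interest_rate"].median())

    # Enforce one definition through history: a line of credit is indicated by a
    # nonzero original commitment amount.
    out["is_line_of_credit"] = out["orig_commitment_amt"].fillna(0) > 0

    return out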

The Beautiful

A final step must be taken before data can be used for modeling. This step takes clean data and converts it to model-ready data.

At FRG we use the term model-ready to describe clean data with the application of relevant business definitions. An example of a relevant business definition would be how an FI defines default[3]. Once the definition has been created, the corresponding logic needs to be applied to the clean data in order to create, say, a default indicator variable, as in the sketch below.
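
If, for example, the chosen definition were 90+ days past due, bankruptcy, or non-accrual (one of the options noted in the footnote), the model-ready flag might be derived as in this hypothetical sketch; the column names are assumptions:

import pandas as pd

def add_default_indicator(clean_df):
    """Apply an assumed business definition of default to clean data."""
    out = clean_df.copy()
    # Assumed definition: 90+ DPD, in bankruptcy, or in non-accrual.
    out["default_flag"] = (
        (out["days_past_due"] >= 90)
        | out["bankruptcy_flag"].astype(bool)
        | out["nonaccrual_flag"].astype(bool)
    ).astype(int)
    return out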

Just like a caterpillar metamorphosing into a butterfly, dirty data needs to morph into model-ready data for an FI to enjoy its true beauty. And only then can an FI move forward on model development.

 

Jonathan Leonardelli, FRM, Director of Business Analytics for the Financial Risk Group, leads the group responsible for model development, data science, documentation, testing, and training. He has over 15 years’ experience in the area of financial risk.

 

[1] Yikes!

[2] Pretty!

[3] E.g., is it 90+ days past due (DPD) or 90+ DPD or in bankruptcy or in non-accrual or …?

 

RELATED:

CECL—Questions to Consider When Selecting Loss Methodologies

CECL – Data (As Usual) Drives Everything

The Importance of Technical Communication

This is the introduction to a new blog series, The Importance of Technical Communication, which will focus on topics such as verbal and written communication, workplace etiquette, and teamwork in the workplace.

Soft skills, as a general term, include interpersonal skills, leadership, dependability, willingness to learn, and effective communication skills that can be used in any career. Sociologists and anthropologists describe them as skills generally required to become a functioning member of society. Yet many articles point out a lack of these soft skills among college graduates and cite it as a main reason many cannot get hired; plenty of recent headlines make the same point.

A survey by the Workforce Solutions Group at St. Louis Community College classifies these deficiencies as applicant shortcomings. In the St. Louis regional survey, poor work habits, lack of critical thinking and problem-solving skills, lack of teamwork or collaboration, and lack of communication or interpersonal skills rank highest among applicant shortcomings in both the technology and finance sectors.

| Applicant Shortcoming | Technology | Finance |
| Poor work habits | 56% | 50% |
| Lack of critical thinking and problem-solving skills | 44% | 50% |
| Lack of teamwork or collaboration | 49% | 43% |
| Lack of communication or interpersonal skills | 58% | 38% |

Table 1: Applicant Shortcomings – 2018 State of St. Louis Workforce Report to the Region

In today's society, with tools at our fingertips, communication is key. In the workplace, interpersonal skills are called on constantly, every day. Often other workplace issues, such as a lack of collaboration skills, arise from communication issues. Given these alarming statistics, how do we, in the technology and finance domains, encourage the improvement of these skills within our companies and deal with applicants who lack them? This blog series will discuss these questions and provide tips on how to communicate technical information effectively in the workplace.

Samantha Zerger, business analytics consultant with the Financial Risk Group, is skilled in technical writing. Since graduating from the North Carolina State University’s Financial Mathematics Master’s program in 2017 and joining FRG, she has taken on leadership roles in developing project documentation as well as improving internal documentation processes.

 

CECL – Data (As Usual) Drives Everything

To appropriately prepare for CECL a financial institution (FI) must have a hard heart-to-heart with itself about its data. Almost always, simply collecting data in a worksheet, reviewing it for gaps, and then giving it the thumbs up is insufficient.

Data drives all parts of the CECL process. The sections below, by no means exhaustive, highlight key areas where your data, simply by being your data, constrains your options.

Segmentation

Paragraph 326-20-30-2 of the Financial Accounting Standards Board (FASB) standards update[1] states: “An entity shall measure expected credit losses of financial assets on a collective (pool) basis when similar risk characteristic(s) exist.” It then points to paragraph 326-20-55-5 which provides examples of risk characteristics, some of which are: risk rating, financial asset type, and geographical location.

Suggestion: prior to reviewing your data, consider what risk profiles are in your portfolio. After that, review your data to see if it can adequately capture those risk profiles. As part of that process, consider reviewing the items below (a simple programmatic pass, sketched after the list, can help):

  • Frequency of missing values in important variables
  • Consistency in values of variables
  • Definitional consistency[2]
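
Here is one minimal way such a pass could be scripted; the variables chosen and the summary measures are illustrative assumptions, and definitional consistency usually still needs a human eye:

import pandas as pd

def data_review(df, key_vars):
    """Summarize missing-value frequency and value consistency for key variables."""
    rows = []
    for col in key_vars:
        rows.append({
            "variable": col,
            "pct_missing": df[col].isna().mean(),
            # Many distinct raw values in a code-like field (e.g., state or
            # risk rating) hint at inconsistent values.
            "distinct_values": df[col].nunique(dropna=True),
        })
    return pd.DataFrame(rows)

# Example: data_review(loan_df, ["risk_rating", "asset_type", "state"])
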
Methodology Selection

The FASB standards update does not provide guidance as to which methodologies to use[3]. That decision is entirely up to the FI[4]. However, the methodologies that are available to an FI are limited by the data it has. For example, if an FI has limited history, then methodologies rooted in historical behavior (e.g., vintage analysis or loss component) are likely out of the question.

Suggestion: review the historical data and ask yourself these questions: 1) Do I have sufficient data to capture the behavior of a given risk profile? 2) Is my historical data of good quality? 3) Are there gaps in my history?

Granularity of Model

Expected credit loss can be determined on three different levels of granularity: loan, segment (i.e., risk profile), and portfolio. Each granularity level has a set of pros and cons but which level an FI can use depends on the data.

Suggestion: review variables that are account-specific (e.g., loan-to-value, credit score, number of accounts with the institution) and ask yourself: Are the sources of these variables reliable? Do they get refreshed often enough to capture changes in customer behavior or the macroeconomic environment?

Hopefully, this post has started you critically thinking about your data. While data review might seem daunting, I cannot stress enough—it’s needed, it’s critical, it’s worth the effort.

 

Jonathan Leonardelli, FRM, Director of Business Analytics for the Financial Risk Group, leads the group responsible for model development, data science, documentation, testing, and training. He has over 15 years’ experience in the area of financial risk.

 

[1] You can find the update here

[2] More on what these mean in a future blog post

[3] Paragraph 326-20-30-3

[4] A future blog post will cover some questions to ask to guide in this decision.

 

RELATED:

CECL—The Caterpillar to Butterfly Evolution of Data for Model Development

Avoiding Discrimination in Unstructured Data

An article published by the Wall Street Journal on Jan. 30, 2019  got me thinking about the challenges of using unstructured data in modeling. The article discusses how New York’s Department of Financial Services is allowing life insurers to use social media, as well as other nontraditional sources, to set premium rates. The crux: the data cannot unfairly discriminate.  

I finished the article with three questions on my mind. The first: How does a company convert unstructured data into something useful? The article mentions that insurers are leveraging public information – like motor vehicle records and bankruptcy documents – in addition to social media. Surely, though, this information is not in a structured format to facilitate querying and model builds.

Second: How does a company ensure the data is good quality? Quality here doesn't only mean the data is clean and useful; it also means the data is complete and unbiased. A lot of effort will be required to take this information and make it model-ready. Otherwise, the models will at best provide spurious output and at worst provide biased output.

The third: With all this data available, what "new" modeling techniques can be leveraged? I suspect many people read that last sentence and thought AI. That is one option. However, the key is to make sure the model does not unfairly discriminate. Using a powerful machine learning algorithm right from the start might not be the best option. Just ask Amazon about its AI recruiting tool.[1]

The answers to these questions are not simple, and they do require a blend of technological aptitude and machine learning sophistication. Stay tuned for future blog posts as we provide answers to these questions.

 

[1] Amazon scraps secret AI recruiting tool that showed bias against women

 

Jonathan Leonardelli, FRM, Director of Business Analytics for the Financial Risk Group, leads the group responsible for model development, data science, documentation, testing, and training. He has over 15 years’ experience in the area of financial risk.

Does the Liquidity Risk Premium Still Exist in Private Equity?

FRG has recently been investigating the dynamics of the private capital markets.  Our work has led us to a ground-breaking product designed to help allocators evaluate potential cash flows, risks, and plan future commitments to private capital.  You can learn more here and read about our modeling efforts in our white paper, “Macroeconomic Effects On The Modeling of Private Capital Cash Flows.”

As mentioned in a previous post, we are investigating the effects of available liquidity in the private capital market.  This leads to an obvious question: Does the Liquidity Risk Premium Still Exist in Private Equity?

It is assumed by most in the space that the answer is “Yes.”  Excess returns provided by private funds are attributable to reduced liquidity.  Lock up periods of 10+ years allow managers to find investments that would not be possible otherwise.  This premium is HIGHLY attractive in a world of low rates and cyclically high public equity valuations.  Where else can a pension or endowment find the rates of return required?

If the answer is "No," then Houston, we have a problem. Money continues to flow into PE at a high rate. A recent article in the FT (quoting data from FRG partner Preqin) shows there is nearly $1.5 trillion in dry powder. Factoring in leverage, there could be in excess of $5 trillion in capital waiting to be deployed. In the case of a "No" answer, return chasing could have gone too far, too fast.

As mentioned, leverage in private capital funds is large and may be growing larger. If the liquidity risk premium has been bid away, what investors are left with may very well be just leveraged market risk. What is assumed to be high alpha/low beta might, in fact, be low alpha/high beta. This has massive implications for asset allocation.

We are attempting to get our heads around this problem in order to help our clients understand the risk associated with their portfolios.

 

Dominic Pazzula is a Director with the Financial Risk Group specializing in asset allocation and risk management.  He has more than 15 years of experience evaluating risk at a portfolio level and managing asset allocation funds.  He is responsible for product design of FRG’s asset allocation software offerings and consults with clients helping to apply the latest technologies to solve their risk, reporting, and allocation challenges.

 


Change in CECL Approved by the FDIC

Yesterday (12/18/18) the Federal Deposit Insurance Corporation (FDIC) approved a measure that will allow a three-year phase-in of the impact of CECL on regulatory capital. The change will also delay the impact on bank stress tests until 2020. It does not affect the rule itself but gives banks the option to phase in the impacts of CECL on regulatory capital over a three-year period. The details can be found in the FDIC memorandum released yesterday, which also adjusts how reserves for "bad loans" will be accounted for in regulatory capital.

The Financial Risk Group recommends that banks use this time to better understand the impact, and the opportunities, that result from the mandated changes. "Time to implementation has been a limiting factor for some institutions to explore the identification of additional stakeholder value, but this should no longer be the case," stated John Bell, FRG's managing partner. FRG has partnered, and is currently partnering, with clients of all types on a number of CECL assessments and implementations. The lessons learned to date are available in a number of our publications, including CECL-Considerations, Developments, and Opportunities and Current Expected Credit Loss-Why The Expectations Are Different.

Top 6 Things To Consider When Creating a Data Services Checklist

“Data! Data! Data! I can’t make bricks without clay.”
— Sherlock Holmes, in Arthur Conan Doyle’s The Adventure of the Copper Beeches

You should by now have a solid understanding of the growth and history of data, data challenges and how to effectively manage them, what data as a service (DaaS) is, how to optimize data using both internal and external data sources, and the benefits of using DaaS. In our final post of the series, we will discuss the top six things to consider when creating a Data Services strategy.

Let's break this down into two sections: 1) prerequisites and 2) the checklist.

Prerequisites

We've identified five crucial points below to consider prior to starting your data services strategy. These will help frame and pull together the sections of information needed to build a comprehensive strategy to move your business towards success.

Prerequisites:

1: View data as a strategic business asset

In the age of data regulation, including the BCBS 239 principles for effective risk data aggregation and risk reporting, GDPR, and others, data, especially data relating to an individual, is considered an asset that must be managed and protected. It can also be aggregated, purchased, traded, and legally shared to create bespoke user experiences and support more targeted business decisions. Data must be classified and managed with the appropriate level of governance, in the same vein as other assets such as people, processes, and technology. Adopting this mindset, appreciating the value of data, and recognizing that not all data is alike and must be managed appropriately will ultimately ensure business success in a data-driven world.

2: Ensure executive buy-in, senior sponsorship and support

As with any project, executive buy-in is required to ensure top-down adoption. However, partnering with business line executives who create data and are power users of it can help champion its proper management and reuse in the organization. This assists in achieving goals and ensuring project and business success. The numbers don't lie: business decisions should be driven by data.

3: Have a defined data strategy and target state that supports the business strategy

Having data for the sake of it won't provide any value; rather, a clearly defined data strategy and target state that outline how data will support the business will allow for increased user buy-in and support. This strategy must include and outline:

  • A Governance Model
  • An organization chart with ownership, roles and responsibilities, and operations; and
  • Goals for data accessibility and operations (or data maturity goals)

If these sections are not agreed upon from the start, uncertainty, overlapping responsibilities, duplication of data and effort, and regulatory or even legal issues may arise.

4: Have a Reference Data Architecture to Demonstrate where Data Services Fit

Understanding the architecture that supports data and data maturity goals, including the components required to manage data from acquisition through distribution and retirement, is critical. It is also important to understand how these components fit into the overall architecture and technology infrastructure of the firm. Defining a clear data architecture and its components, including:

  • Data model(s)
  • Acquisition
  • Access
  • Distribution
  • Storage
  • Taxonomy

is required prior to integrating the data.

5: Data Operating Model – Understanding How the Data Traverses the Organization

It is crucial to understand the data operations and operating model – including who does what to the data and how data ownership changes over time or transfers among owners. Data lineage is key to keeping data clean and optimizing its use: where your data came from, its intended use, who has or is allowed access to it, and where it goes inside or outside the organization. Data quality tracking, metrics, and remediation will be required.

Existing recognized standards such as the Global Legal Entity Identifier (LEI) that can be acquired and distributed via data services can help in the sharing and reuse of data that is ‘core’ to the firm. They can also assist in tying together data sets used across the firm.

Checklist/Things to Consider

Once you’ve finished the requirements gathering and understand the data landscape, including roles and responsibilities described above, you’re now ready to begin putting together your data services strategy. To build an all-encompassing strategy, the experts suggest inclusion of the following.

1:  Defined Data Services Required

  •  Classification: core vs. business shared data and ownership
    • Is everyone speaking a common language?
    • What data is ‘core’ to the business, meaning it will need to be commonly defined and used across the organization?
    • What data will be used by a specific business that may not need to be uniformly defined?
    • What business-specific data will be shared across the organization, which may need to be uniformly defined and might need more governance?
  • Internal vs external sourcing
    • Has the business collected or created the data themselves or has it been purchased from a 3rd party? Are definitions, metadata and business rules defined?
    • Has data been gathered or sourced appropriately and with the correct uniform definitions, rules, metadata and classification, such as LEI?
  • Authoritative Data Sources for the Data Services
    • Have you documented where, from whom, when etc. the data was gathered (from Sources of Record or Sources of Origin)? For example, the Source of Origin might be a trading system, an accounting system or a payments system. The general ledger might be the Source of Record for positions.
    • Who is the definitive source (internal/external)? Which system?
  • Data governance requirements
    • Have you adhered to the proper definitions, rules, and standards set in order to handle data?
    • Who should be allowed to access the data?
    • Which applications (critical, usually externally facing) must access the data directly?
  • Data operations and maintenance
    • Have you kept your data clean and up to date?
    • Are you up to speed with regulations, such as GDPR, and have you successfully obtained explicit consent for the information?
    • Following your organization chart and the rules and requirements detailed above, are the data owners known and informed, and do they understand they are responsible for maintaining their data's integrity?
    • Are data quality metrics monitored with a process to correct data issues?
    • Do all users with access to the data know who to speak to if there is a data quality issue and know how to fix it?
  • Data access, distribution and quality control requirements
    • Has the data been classified properly? Is it public information? If not, is it restricted to those who need it?
    • Have you defined how you share data between internal/external parties?
    • Have the appropriate rules and standards been applied to keep it clean?
    • Is there a clearly defined process for this?
  • Data integration requirements
    • If the data will be merged with other data sets/software, have the data quality requirements been met to ensure validity?
    • Have you prioritized the adoption of which applications must access the authoritative data distributed via data services directly?
    • Have you made adoption easy – allowing flexible forms of access to the same data (e.g., via spreadsheets, file transfers, direct APIs, etc.)?

2: Build or Acquire Data Services

To recap, are you building or acquiring your own Data Services? Keep in mind that the following must be met and must adhere to compliance requirements:

  • Data sourcing and classification, assigning ownership
  • Data Access and Integration
  • Proper Data Services Implementation, access to authoritative data
  • Proper data testing, and data remediation, keeping the data clean
  • Appropriate access control and distribution of the data, flexible access
  • Quality control monitoring
  • Data issue resolution process

The use of, and regulations around, data will constantly evolve, as will the number of users data can support in business ventures. We hope that this checklist will provide a foundation for building and supporting your organization's data strategies. If there are any areas you're unclear on, don't forget to look back through our first five blogs, which provide more in-depth overviews of the use of data services to support the business.

Thank you for tuning in to our first blog series on data management. We hope you found it informative and, most importantly, useful toward your business goals.

If you enjoyed our blog series or have questions on the topics discussed, write to us on Twitter at @FRGRISK.

Dessa Glasser is a Principal with the Financial Risk Group, and an independent board member of Oppenheimer & Company, who assists Virtual Clarity, Ltd. on data solutions as an Associate. 

 

RELATED:

Data Is Big, Did You Know?

Data Management – The Challenges

Data as a Service (DaaS) Solution – Described

Data as a Service (DaaS) Data Sources – Internal or External?

Data as a Service (DaaS) – The Benefits

Is Your Business Getting The Full Bang for Its CECL Buck?

Accounting and regulatory changes often require resources and efforts above and beyond “business as usual”, especially those like CECL that are significant departures from previous methods. The efforts needed can be as complex as those for a completely new technology implementation and can take precedence over projects that are designed to improve your core business … and stakeholder value.

But with foresight and proper planning, you can prepare for a change like CECL by leveraging resources in a way that will maximize your efforts to meet these new requirements while also enhancing business value. At Financial Risk Group, we take this approach with each of our clients. The key is to start by asking “how can I use this new requirement to generate revenue and maximize business performance?”

 

The Biggest Bang Theory

In the case of CECL, there are two significant areas that will create the biggest institution-wide impact: analytics and data governance. While the importance of these is hardly new to financial institutions, we are finding that many neglect to leverage their CECL data and analytics efforts to create that additional value. Some basic first steps you can take include the following.

  • Ensure that the data utilized is accurate and that its access and maintenance align to the needs and policies of your business. In the case of CECL these will be employed to create scenarios, model, and forecast … elements that the business can leverage to address sales, finance, and operational challenges.
  • For CECL, analytics and data are leveraged in a much more comprehensive fashion than previous methods of credit assessment provided.  Objectively assess the current state of these areas to understand how the efforts being put toward CECL implementation can be leveraged to enhance your current business environment.
  • Identify existing available resources. While some firms will need to spend significant effort creating new processes and resources to address CECL, others will use this as an opportunity to retire and re-invent current workflows and platforms.

Recognizing the business value of analytics and data may be intuitive, but what is often less intuitive is knowing which resources earmarked for CECL can be leveraged to realize that broader business value. The techniques and approaches we have put forward provide good perspective on the assessment and augmentation of processes and controls, but how can these changes be quantified? Institutions without in-house experienced resources are well advised to consider an external partner. The ability to leverage expertise of staff experienced in the newest approaches and methodologies will allow your internal team to focus on its core responsibilities.

Our experience with this type of work has provided some very specific results that illustrate the short-term and longer-term value realized. The example below shows the magnitude of change and benefits experienced by one of our clients: a mid-sized North American bank. A thorough assessment of its unique environment led to a redesign of processes and risk controls. The significant changes implemented resulted in less complexity, more consistency, and increased automation. Additionally, value was created for business units beyond the risk department. While different environments will yield different results, those illustrated through the methodologies set forth here provide a good example to better judge the outcome of a process and controls assessment.

 

|  | Legacy Environment | Automated Environment |
| Reporting Output | No daily available manual controls for risk reporting | Daily in-cycle reporting controls are automated with minimum manual interaction |
| Process Speed | Credit run 40+ hours; manually-input variables prone to mistakes | Credit run 4 hours; cycle time reduced from 3 days to 1 for variable creation |
| Controls & Audit | Multiple audit issues and regulatory MRAs | Audit issues resolved and MRA closed |
| Model Execution | Spreadsheet driven | 90 models automated, resulting in 1,000 manual spreadsheets eliminated |

 

While one approach will not fit all firms, providing clients with an experienced perspective on more fully utilizing their specific investment in CECL allows them to make decisions for the business that might otherwise never be considered, thereby optimizing the investment in CECL and ensuring they receive full value from their CECL buck.

More information on how you can prepare for—and drive additional value through—your CECL preparation is available on our website and includes:

White Paper – CECL: Why the expectations are different

White Paper – CECL Scenarios: Considerations, Development and Opportunities

Blog – Data Management: The Challenges

Subscribe to our blog!