Stress Testing Private Equity


FRG, in partnership with Preqin, has developed a system for simulating cash flows for private capital investments (PCF).  PCF allows the analyst to change assumptions about future economic scenarios and investigate how the projected cash flows change in response.  This post picks a venture capital fund, shocks the economy with a mild recession in the following quarters, and views the change in cash flow projections.

FRG develops scenarios for our clients.  The two we use most often are the “Growth” (or “Base”) scenario and the “Recession” scenario.  Both are based on the Federal Reserve’s CCAR “Base” and “Adverse” scenarios, which are published yearly and used for banking stress tests.

The “Growth” scenario (using the FED “Base” scenario) assumes economic growth more or less in line with recent experience.

The “Recession” scenario (FED “Adverse”) contains a mild recession starting in late 2019 and bottoming in Q2 2020.  GDP recovers to its starting value in Q2 2021.  The recovery back to trend-line (potential) GDP runs through Q2 2023.

[Chart: Real GDP Growth]


The economic drawdown is mild; the economy loses only 1.4% from its high.

Start Date | Trough Date | Recovery Date | Full Potential | Depth
Q4 2019 | Q2 2020 | Q2 2021 | Q2 2023 | -1.4%

Equity market returns are a strong driver of performance in private capital.  The total equity market returns in the scenarios include a 34% drawdown in the index.  The market bottoms in Q1 2022 and recovers to new highs by Q1 2024.

This drawdown is shallow compared to previous history, and the recovery period is shorter:

Begin Date | Trough Date | Recovery Date | Depth | Total Length (qtrs) | Peak to Trough (qtrs) | Trough to Recovery (qtrs)
06/30/2000 | 09/30/2002 | 12/31/2006 | -47% | 27 | 10 | 17
12/31/2007 | 03/31/2009 | 03/31/2013 | -49% | 22 | 6 | 16
12/31/2019 | 03/31/2022 | 03/31/2024 | -34% | 18 | 10 | 8

The .COM and Global Financial Crisis (GFC) recessions each took off nearly 50% of the market value; this recession draws down only 34%.  The time from peak to trough is 10 and 6 quarters for the .COM crash and the GFC, respectively, so at 10 quarters we are in line with the .COM crash.  The recovery, however, is nearly twice as fast as either of those large drawdowns: 8 quarters versus 17 and 16.
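To experiment with a scenario like this outside the PCF engine, the equity path implied by the table can be encoded directly. The sketch below is our own toy reconstruction; the straight-line quarterly steps between the peak, trough, and recovery points are an assumption, not the actual scenario path.

```python
import numpy as np

# Toy reconstruction of the "Recession" scenario's equity index path from the
# table above: peak at Q4 2019 (indexed to 100), a -34% trough 10 quarters
# later (Q1 2022), and recovery to the prior peak 8 quarters after that
# (Q1 2024). Linear interpolation between those points is an assumption.
peak = 100.0
trough = peak * (1 - 0.34)                # -34% drawdown
down = np.linspace(peak, trough, 11)      # Q4 2019 .. Q1 2022 (10 steps)
up = np.linspace(trough, peak, 9)[1:]     # Q2 2022 .. Q1 2024 (8 steps)
index_path = np.concatenate([down, up])   # 19 quarterly index levels
quarterly_returns = index_path[1:] / index_path[:-1] - 1.0
```

A path like this, fed alongside GDP and other macro drivers, is the kind of input a cash flow simulation engine consumes.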

We start by picking a 2016 vintage venture capital fund.  This fund has called around 89% of its committed capital, has an RVPI of 0.85 and currently sports about an 18% IRR.  For this exercise, we assume a $10,000,000 commitment.

Feeding the two scenarios, this fund, and a few other estimates into the PCF engine, we can see a dramatic shift in the expected J-curve.

Under the “Growth” scenario, the fund’s payback date (the date at which cumulative cash flow turns positive) is Q1 2023.  The recession prolongs the payback period, pushing the expected payback date to Q3 2025, an additional 2.5 years.  Further, the total cash returned to investors is much lower.

This lower cash returned, together with the lengthening of the payback period, has a dramatic effect on the fund IRR.

That small recession drops the expected IRR of the fund a full 7 percentage points (annualized).  The distribution shown in the box-and-whisker plot above illustrates the dramatic shift in possible outcomes.  Whereas before there were only a few scenarios where the fund returned a negative IRR, in the recession nearly a quarter of all scenarios produce a negative return.  There are more than a few cases where the fund’s IRR is well below -10% annually!
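To make these metrics concrete, here is a minimal sketch (our own illustration, not the PCF engine’s internals) of how a payback quarter and an annualized IRR can be computed from a vector of quarterly net cash flows, and how a distribution of simulated paths can be summarized. The example cash flows are hypothetical, sized only to echo the fund above ($8.9M called against a $10M commitment).

```python
import numpy as np

def payback_quarter(cash_flows):
    """Index of the first quarter where cumulative net cash flow turns positive."""
    cum = np.cumsum(cash_flows)
    hits = np.nonzero(cum > 0)[0]
    return int(hits[0]) if hits.size else None   # None: never pays back

def annualized_irr(cash_flows, lo=-0.99, hi=2.0, tol=1e-9):
    """Quarterly IRR found by bisection on NPV, compounded to an annual rate.

    Assumes the usual calls-then-distributions profile, so NPV is
    decreasing in the discount rate over the bracket [lo, hi].
    """
    flows = np.asarray(cash_flows, dtype=float)
    t = np.arange(len(flows))
    npv = lambda r: np.sum(flows / (1.0 + r) ** t)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
    return (1.0 + 0.5 * (lo + hi)) ** 4 - 1.0

# Hypothetical quarterly net cash flows: calls early, distributions later.
flows = [-4.0e6, -3.0e6, -1.9e6, 0.5e6, 1.0e6, 2.0e6, 3.0e6, 4.0e6]
print(payback_quarter(flows), annualized_irr(flows))

# Given many simulated paths (one per scenario), the share of negative-IRR
# outcomes summarizes the downside shown in the box-and-whisker plot:
# neg_share = np.mean([annualized_irr(p) < 0 for p in simulated_paths])
```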

This type of analysis should provide investors in private capital food for thought.  How well do your return expectations hold up during an economic slowdown?  What does the distribution of expected cash flows and returns tell you about the risk in your portfolio?

At FRG, we specialize in helping people answer these questions.  If you would like to learn more, please visit www.frgrisk.com/vor-pcf or contact us.

Dominic Pazzula is a Director with FRG, specializing in asset allocation and risk management. He has more than 15 years of experience evaluating risk at a portfolio level and managing asset allocation funds. He is responsible for product design of FRG’s asset allocation software offerings and consults with clients helping to apply the latest technologies to solve their risk, reporting, and allocation challenges.


How Embracing SR 11-7 Guidelines Can Support the CECL Process

The Board of Governors of the Federal Reserve System’s SR 11-7 supervisory guidance (2011) provides an effective model risk management framework for financial institutions (FIs). SR 11-7 covers everything from the definition of a model to the robust policies and procedures that should exist within a model risk management framework. To reduce model risk, any FI should consider following the guidance throughout its internal and regulatory processes, as its guidelines are comprehensive and reflect a banking industry standard.

The following items and quotations represent an overview of the SR 11-7 guidelines (Board of Governors of the Federal Reserve System, 2011):

  1. The definition of a model – “the term model refers to a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates.”
  2. A focus on the purpose/use of a model – “even a fundamentally sound model producing accurate outputs consistent with the design objective of the model may exhibit high model risk if it is misapplied or misused.”
  3. The three elements of model risk management:
    • Robust model development, implementation, and use – “the design, theory, logic underlying the model should be well documented and generally supported by published research and sound industry practice.”
    • Sound model validation process – “an effective validation framework should include three core elements: evaluation of conceptual soundness, …, ongoing monitoring, …, and benchmarking, outcomes analysis, …”
    • Governance – “a strong governance framework provides explicit support and structure to risk management functions through policies defining relevant risk management activities, procedures that implement those policies, allocation of resources, and mechanisms for evaluating whether policies and procedures are being carried out as specified.”

Much of what the SR 11-7 guidelines discuss applies to the new aspects of the CECL accounting standard (FASB, 2016). Any FI subject to CECL must provide explanations, justifications, and rationales for the entirety of the CECL process, including (but not limited to) model development, validation, and governance. The SR 11-7 guidelines will help FIs develop effective CECL processes that limit model risk.

Some considerations from the SR 11-7 guidelines regarding the components of CECL include (but are not limited to):

  • Determining appropriateness of data and models for CECL purposes. Existing processes may need to be modified due to some differing CECL requirements (e.g., life of loan loss estimation).
  • Completing comprehensive documentation and testing of model development processes. Existing documentation may need to be updated to comply with CECL (e.g., new models or implementation processes).
  • Accounting for model uncertainty and inaccuracy through the understanding of potential limitations/assumptions. Existing model documentation may need to be re-evaluated to determine if new limitations/assumptions exist under CECL.
  • Ensuring validation independence from model development. Existing validation groups may need to be further separated from model development (e.g., external validators).
  • Developing a strong governance framework specifically for CECL purposes. Existing policies/procedures may need to be modified to ensure CECL processes are being covered.

The SR 11-7 guidelines can provide FIs with the information they need to start their CECL process. Although not mandated, following these guidelines is important for reducing model risk and for establishing standards that all teams within and across FIs can follow and regard as a true industry standard.

Resources:

  1. Board of Governors of the Federal Reserve System. “SR 11-7 Guidance on Model Risk Management”. April 4, 2011.
  2. Daniel Brown and Dr. Craig Peters. “New Impairment Model: Governance Considerations”. Moody’s Analytics Risk Perspectives. The Convergence of Risk, Finance, and Accounting: CECL. Volume VIII. November 2016.
  3. Financial Accounting Standards Board (FASB). Financial Instruments – Credit Losses (Topic 326). No. 2016-13. June 2016.

Samantha Zerger, business analytics consultant with FRG, is skilled in technical writing. Since graduating from the North Carolina State University’s Financial Mathematics Master’s program in 2017 and joining FRG, she has taken on leadership roles in developing project documentation as well as improving internal documentation processes.


Improve Your Problem-Solving Skills

This is the fifth post in an occasional series about the importance of technical communication in the workplace.

 “Work organisations are not only using and applying knowledge produced in the university but they are also producing, transforming, and managing knowledge by themselves to create innovations” (Tynjälä, Slotte, Nieminen, Lonka, & Olkinuora, 2006).

Problem-solving skills are rooted in the fact that you must learn how to think, not what to think. Most classes in high school and college teach you what to think (e.g., history dates, mathematical equations, grammar rules), but you must develop problem-solving skills in order to learn how to think.

In the technical workplace, you are expected to be able to take a problem and come up with a solution, possibly one that has never been thought of before. Employers are looking for people who have the right skills to do that very thing. Because of this, most interview processes will inevitably include at least one problem-solving question.

  • “How have you handled a problem in your past? What was the result?”
  • “How would you settle the concerns of a client?”
  • “How would you handle a tight deadline on a project?”

The way you answer the problem-solving question usually gives the interviewer a good sense of your problem-solving skills. Unfortunately for the interviewee, problem solving is a BROAD skill set made up of:

  • Active listening: in order to identify that there is a problem
  • Research: in order to identify the cause of the problem
  • Analysis: in order to fully understand the problem
  • Creativity: in order to come up with a solution, either based on your current knowledge (intuitively) or using creative thinking skills (systematically)
  • Decision making: in order to make a decision on how to solve the problem
  • Communication: in order to communicate the issue or your solution to others
  • Teamwork: in order to work with others to solve the problem
  • Dependability: in order to solve the problem in a timely manner

So how do you, as the interviewee, convey that you have good problem-solving skills? First, acknowledge the skill set needed to solve the problem relating to each step in the problem-solving process:

Step in Problem Solving | Skill Set Needed
1. Identifying the problem | Active listening, research
2. Understanding and structuring the problem | Analysis
3. Searching for possible solutions or coming up with your own solution | Creativity, communication
4. Making a decision | Decision making
5. Implementing a solution | Teamwork, dependability, communication
6. Monitoring the problem and seeking feedback | Active listening, dependability, communication

Then, note how you are improving, or plan to improve, your problem-solving skills. This may include gaining more technical knowledge in your field, putting yourself in new situations where you may need to problem solve, observing others who are known for their good problem-solving skills, or simply practicing problems on your own. Problem solving involves a diverse skill set and is key to surviving in a technical workplace.

Resources:

  1. Problem-Solving Skills: Definitions and Examples. Indeed Career Guide.
  2. Tynjälä, P., Slotte, V., Nieminen, J., Lonka, K., & Olkinuora, E. (2006). From university to working life: Graduates’ workplace skills in practice.


Samantha Zerger, business analytics consultant with the Financial Risk Group, is skilled in technical writing. Since graduating from the North Carolina State University’s Financial Mathematics Master’s program in 2017 and joining FRG, she has taken on leadership roles in developing project documentation as well as improving internal documentation processes.


The Importance of Good Teamwork in the Technical Workplace

This is the fourth post in an occasional series about the importance of technical communication in the workplace.

Daily teamwork is an essential part of technical workplace success. Strong technical communication and collaboration skills are necessary to be an active and successful member of a team working to achieve a common goal.

When thinking about the term teamwork, the collaborative effort of a team, many associate it with Tuckman’s (1965) stages of team development: forming, storming, norming, and performing. In 1977, Tuckman added a fifth stage: adjourning.

  1. Forming – This stage involves the meeting of all the team members, discussions of each member’s skills, and dividing up of responsibilities.
  2. Storming – This stage involves the sharing of ideas among the members. Team leaders must resolve any challenges/competition between members and ensure that the project is moving forward.
  3. Norming – This stage involves the complete understanding of a common goal and responsibilities among members. Major conflicts are resolved.
  4. Performing – This stage involves the team members working efficiently together with minimal issues. The project work should continue on schedule.
  5. Adjourning – This stage involves the team members reaching their project end and going their separate ways onto new projects.

Although Tuckman’s stages represent a standard team development flow, there is much more to think about as a member of that team. How should I converse with others during a conflict in the Storming stage? How should I discuss my skills with other members in the Forming stage? How do I ensure that I do not fall behind the project schedule in the Performing stage?

Here are some tips that may help you in the different stages of team development:

  • Be flexible in your work. In the Forming stage, you may be asked to complete a task that you may not particularly enjoy. Thus, in order to be a good team member, you must be flexible enough to say that you will complete the task to reach the team’s common goal.
  • Complete your tasks in a timely manner. In the Performing stage, keep track of your own responsibilities and when the tasks are due. Communicate freely with the leader of the team throughout the stage to keep up-to-date on the team’s activities. If possible, finish your tasks early and offer help to other team members.
  • Avoid conflicts. In the Storming stage, some conflict is inevitable, but there are ways to avoid larger conflicts with other team members.
    • Be aware of other members’ attitudes towards certain topics. Speak about those topics in smaller settings.
    • Offer compromises when tensions start to rise. The compromises might seem more appealing than the associated conflicts.
    • Attempt to resolve conflicts as soon as possible. The quicker they are resolved, the quicker the project can move forward.
    • Communicate face-to-face. Sometimes words get lost in translation.
  • Communicate often. Throughout the stages, make it a point to communicate with the team leader and other members of the team on a continual basis. This may include sending your status updates to the team leader, asking questions about your tasks, or simply checking in with other members.

All in all, it is important to be a team player. Every team member should be on the same side: the one that completes the project efficiently, successfully, and with minimal headaches.

Resources

  • Tuckman, B. W. (1965). Developmental sequence in small groups. Psychological Bulletin, 63, 384-399.

Samantha Zerger, business analytics consultant with the Financial Risk Group, is skilled in technical writing. Since graduating from the North Carolina State University’s Financial Mathematics Master’s program in 2017 and joining FRG, she has taken on leadership roles in developing project documentation as well as improving internal documentation processes.

Improving Verbal and Nonverbal Communication

This is the third post in an occasional series about the importance of technical communication in the workplace.

The ability to speak and to portray yourself professionally, efficiently, and effectively is the most important business skill a person can have, apart from the skills required by a job title. Those who display impressive verbal and non-verbal communication skills on day one of a new job, or during an interview, stand out as strong, confident employees or candidates. Those who give a strong presentation at a conference or to coworkers stand out as experts in their field. Those who speak up and intelligently portray their thoughts on a matter stand out as leaders.

Communicating with others during an interview, presentation, or meeting can be daunting for some, especially those who lack experience in a work environment. Those with more workplace experience tend to understand how to portray the necessary information effectively. Usually these people are managers, executives, or leaders in general, because they are able to communicate in a way that gets the results they need, whether completed tasks or motivated employees.

It is important to note that the term verbal communication refers to the simple use of sounds and words, whereas the term non-verbal communication covers all other forms of communication. The way a person talks is not the only concern; the way a person physically carries themselves is also important.

Technical Communication Today categorizes the act of presenting a topic to a group by body language, appearance, voice, rhythm, and tone. These categories can also be extended to any type of interaction in the workplace, whether one-on-one (e.g., an interview), one-to-many (e.g., a presentation), or many-to-many (e.g., a social gathering). The five categories frame the tips below for those who lack experience or who want to improve their verbal and non-verbal communication skills.

Focus on your body language
  • Control your facial expressions and think about how they are being conveyed to others. Is your facial expression conveying that you are happy or displeased?
  • Stand/sit up straight and drop your shoulders. Those who are nervous tend to slouch and raise their shoulders, which limits airflow.
  • Make eye contact with others when they are speaking to you. This indicates that you are paying attention and that you are interested in what they are saying.
Dress for success
  • Dress appropriately for the occasion. Ensure that what you are wearing matches the importance of the situation. When in doubt, dress a level better than you expect others to dress.
Control your voice
  • Enunciate your words and phrases. This ensures that others are hearing you correctly.
  • Project your voice and speak louder than normal when presenting to a large group of people.
Focus on your speaking rhythm
  • Do not be afraid of silence. Embrace the silent moments and make them work in your favor. Use them to emphasize your main points or use them to avoid the um’s and ah’s of nervousness (i.e., think before you speak).
  • Slow your speech. Usually those who are nervous tend to speak too quickly.
Choose a tone for your voice
  • Select a tone based on the image you want to project, e.g., professional, passionate, pleased.

Also, practice makes perfect. Do not assume that simply reading the above tips is sufficient to improve your skills. Practice, and actively concentrate on these techniques daily, to further improve your verbal and non-verbal communication skills. The more you practice, the more comfortable you will be in professional settings and in front of an audience.


Resources:

  • Richard Johnson-Sheehan. “Technical Communication Today: Special Edition for Society for Technical Communication Foundation Certification”. Fifth Edition.


Samantha Zerger, business analytics consultant with the Financial Risk Group, is skilled in technical writing. Since graduating from the North Carolina State University’s Financial Mathematics Master’s program in 2017 and joining FRG, she has taken on leadership roles in developing project documentation as well as improving internal documentation processes.


Improving Business Email Etiquette

This is the second post in an occasional series about the importance of technical communication in the workplace.

According to a 2015 worldwide study by The Radicati Group, Inc., the number of business emails sent and received per user, per day, totals 122, with 112.5 billion in circulation worldwide. These statistics reflect how heavily businesses rely on email communication on a daily basis. Because of this massive influx of email, almost any employee at your workplace could list three pet peeves of theirs regarding email communication. The following are the answers I got from a few FRG employees:

  • Emails that have a missing subject line or have no content
  • Emails that do not have a clear response to your question
  • Emails that do not get to the point quickly or are superfluous

How do we ensure that we are not the employees that are sending the above types of emails? How do we ensure that we are taking advantage of this easy communication tool to be efficient, productive, and constructive in the workplace? How do we ensure that we are communicating in a professional manner?

Follow these rules (in no particular order) on email etiquette to make sure you are sending correct and understandable information.

  1. Keep it simple. Use succinct sentences that get promptly to the point.
  2. Be professional. If you are not positive the receiver of the email knows who you are, briefly introduce yourself (e.g., state your name, job title, and purpose of email).
  3. Make it standalone. Assume that the person did not read the previous emails in the thread. Refresh their memory on the discussion first, and then continue.
  4. Read the entire email before sending. Ensure that there are no typos and that the content makes sense.
  5. Make no assumptions. Do not assume that others understand what you are saying. Be clear in your statements/questions.
  6. Be consistent. Include a clear and intuitive subject and body content. Ensure that terms are being referenced the same in email threads to avoid confusion (e.g., Financial Risk Group vs. FRG).
  7. Always consider lists. Use lists to group related items, steps, questions, etc. Use numbered or alphabetical lists for items that must appear in a specific order and bullets for items that do not.
  8. Use parallel structure. Construct sentences so that readers can understand difficult concepts more quickly.
    • Parallel structure is especially important when writing lists. Begin each statement with the same part of speech. For example, if explaining steps in a process, use verbs such as type, click, or close to begin each statement.
    • Parallel structure can be used in comparisons. Repeat the same phrases in order to be clear. For example, the new user interface is more user-friendly than the old user interface.
    • Parallel structure can help define the format and/or layout. Repeat the same format and/or layout to ensure consistent organization. For example, if you include a bolded header for one topic, use a bolded header for each topic.

The above rules can be applied to emails sent to any reader, whether a co-worker, boss, client, or future employer. Ultimately, it is important to send clear, understandable statements and questions to ensure you receive a productive and expected response.

Samantha Zerger, business analytics consultant with the Financial Risk Group, is skilled in technical writing. Since graduating from the North Carolina State University’s Financial Mathematics Master’s program in 2017 and joining FRG, she has taken on leadership roles in developing project documentation as well as improving internal documentation processes.

CECL – The Power of Vintage Analysis

I would argue that a critical step in getting ready for CECL is to review the vintage curves of the segments that have been identified. Not only do the resulting graphs provide useful information, but the process itself also requires thought about how to prepare the data.

Consider the following graph of auto loan losses for different vintages of Not-A-Real-Bank bank[1]:

[Graph: auto loan loss vintage curves]

While this is a highly stylized depiction of vintage curves, its intent is to illustrate the information that can be gleaned from such a graph. Consider the following:

  1. A clear end to the seasoning period can be determined (period 8)
  2. Outlier vintages can be identified (2015Q4)
  3. Visual confirmation that the segmentation captures risk profiles (there isn’t a substantial number of vintages behaving oddly)

But that’s not all! To get to this graph, some important questions need to be asked of the data. For example:

  1. Should prepayment behavior be captured when deriving the loss rates? If so, what’s the definition of prepayment?
  2. At what time period should the accumulation of losses be stopped (e.g., contractual term)?
  3. Is there enough loss[2] behavior to model on the loan level?
  4. How should accounts that renew be treated (e.g., put in new vintage)?
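Answering these questions shapes the data set itself. As a starting point, here is a minimal sketch of how vintage curves can be built once the data is prepared; the column names (`vintage`, `period`, `orig_balance`, `net_loss`) are our own assumptions about a loan-level layout, not a prescribed schema.

```python
import pandas as pd

# Assumed loan-level layout: one row per loan per period on book, with
# `vintage` (origination quarter), `period` (periods since origination),
# `orig_balance`, and `net_loss` (charge-offs net of recoveries).
def vintage_curves(df: pd.DataFrame) -> pd.DataFrame:
    """Cumulative loss rate by vintage: losses to date / pooled original balance."""
    # Pooled original balance per vintage (count each loan once, at period 0).
    pool = df.loc[df["period"] == 0].groupby("vintage")["orig_balance"].sum()
    # Losses summed by vintage and period, then accumulated within each vintage.
    cum_loss = (df.groupby(["vintage", "period"])["net_loss"].sum()
                  .groupby(level="vintage").cumsum())
    # Divide each vintage's running losses by that vintage's pool.
    return cum_loss.div(pool, level="vintage").unstack("vintage")

# vintage_curves(df).plot() draws one curve per vintage, as in the graph above.
# Decisions such as how to treat prepayments or renewals change `net_loss`
# and `period` before this step.
```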

In conclusion, performing vintage analysis is more than just creating a picture with many different colors. It provides insight into the segments, forces one to consider the data, and, if the data is appropriately constructed, positions one for subsequent analysis and/or modeling.

Jonathan Leonardelli, FRM, Director of Business Analytics for the Financial Risk Group, leads the group responsible for model development, data science, documentation, testing, and training. He has over 15 years’ experience in the area of financial risk.


[1] Originally I called this bank ACME Bank, but when I searched to see if one existed I got this, this, and this…so I changed the name. I then did a search of the new name and promptly fell into a search-engine rabbit hole that, after a while, I climbed out of with the realization that for any 1- or 2-word combination I come up with, someone else has already done the same and then added “bank” to the end.

[2] You can also build vintage curves on defaults or prepayment.


RELATED:

CECL—Questions to Consider When Selecting Loss Methodologies

CECL—The Caterpillar to Butterfly Evolution of Data for Model Development

CECL—Data (As Usual) Drives Everything

IFRS 17: Killing Two Birds

The clock is ticking for the 450 insurers around the world that must comply with International Financial Reporting Standard 17 (IFRS 17) by January 1, 2021, the deadline for companies whose financial year starts on January 1.

Insurers are at different stages of preparation, ranging from performing gap analyses, to issuing requirements to software and consulting vendors, to starting the pilot phase with a new IFRS 17 system, with a few already embarking on implementing a full IFRS 17 system.

Unlike banks, the insurance industry has historically spent less on large IT system revamps. This is in part due to the greater volume, frequency and variety of banking transactions compared to insurance transactions.

IFRS 17 is one of the biggest ‘people, process and technology’ revamp exercises the insurance industry has seen in a long while. The Big 4 firms have published a multitude of papers and videos on the Internet highlighting the impact of the new insurance contracts reporting standard, which was issued by the IASB in May 2017. In short, it is causing a buzz in the industry.

As efforts are focused on ensuring regulatory compliance with the new standard, insurers must also ask: “What other strategic value can be derived from our heavy investment of time, manpower and money in this whole exercise?”

The answer—analytics to gain deeper business insights.

One key objective of IFRS 17 is to provide information at a level of granularity that helps stakeholders assess the effect of insurance contracts on financial position, financial performance and cash flows, increasing transparency and comparability.

Most IFRS 17 systems in the market today achieve this by bringing the required data into the system, computing the results, reporting, and integrating with the insurer’s GL system. From a technology perspective, such systems comprise a data management tool, a data model, a computation engine and a reporting tool. However, most of these systems are not built to provide strategic value beyond pure IFRS 17 compliance.

Apart from the IFRS 17 data, an insurer can use this exercise to put in place an enterprise analytics platform that caters beyond IFRS 17 reporting to broader and deeper financial analytics, as well as customer, operational and risk analytics. To leverage new predictive analytics technologies like machine learning and artificial intelligence, a robust enterprise data platform to house and make available large volumes of data (big data) is crucial.

Artificial intelligence can empower important processes like claims analysis, asset management, and risk calculation and prevention; for instance, better forecasting of claims experience based on a larger variety and volume of real-time data. The same machinery can be used to make informed decisions about investments based on intelligent algorithms, among other use cases.

As the collection of data becomes easier and more cost effective, artificial intelligence can drive whole new growth for the insurance industry.

The key is centralizing most of your data onto a robust enterprise platform to allow cross-line-of-business insights and prediction.

As an insurer, if your firm has not embarked on such a platform, selecting a robust system that can cater to IFRS 17 requirements AND beyond will be a case of killing two birds with one stone.

FRG can help you and your teams get ready for IFRS 17. Contact us today for more information.

Tan Cheng See is Director of Business Development and Operations for FRG.

Top 6 Things To Consider When Creating a Data Services Checklist

“Data! Data! Data! I can’t make bricks without clay.”
— Sherlock Holmes, in Arthur Conan Doyle’s The Adventure of the Copper Beeches

You should by now have a solid understanding of the growth and history of data, data challenges and how to manage them effectively, what data as a service (DaaS) is, how to optimize data using both internal and external data sources, and the benefits of using DaaS. In our final post of the series, we will discuss the top six things to consider when creating a Data Services strategy.

Let’s break this down into two sections: 1) the prerequisites and 2) the checklist.

Prerequisites

We’ve identified five crucial points below to consider prior to starting your data services strategy. These will help frame and pull together the information needed to build a comprehensive strategy that moves your business towards success.


1: View data as a strategic business asset

In the age of data regulation, including the BCBS 239 principles for effective risk data aggregation and risk reporting, GDPR and others, data, especially data relating to an individual, is considered an asset that must be managed and protected. It can also be aggregated, purchased, traded and legally shared to create bespoke user experiences and support more targeted business decisions. Data must be classified and managed with the appropriate level of governance, in the same vein as other assets such as people, processes and technology. Adopting this mindset, appreciating the value of data, and recognizing that not all data is alike and must be managed accordingly will ultimately ensure business success in a data-driven world.

2: Ensure executive buy-in, senior sponsorship and support

As with any project, executive buy-in is required to ensure top-down adoption. Partnering with business-line executives who create data, and who are power users of it, can help champion its proper management and reuse across the organization. This assists in achieving goals and ensuring project and business success. The numbers don’t lie: business decisions should be driven by data.

3: Have a defined data strategy and target state that supports the business strategy

Having data for the sake of it won’t provide any value; rather, a clearly defined data strategy and target state that outline how data will support the business will allow for increased user buy-in and support. This strategy must include and outline:

  • A Governance Model
  • An organization chart with ownership, roles and responsibilities, and operations; and
  • Goals for data accessibility and operations (or data maturity goals)

If these sections are not agreed upon from the start, uncertainty, overlapping responsibilities, duplication of data and effort, and regulatory or potentially legal issues may arise.

4: Have a Reference Data Architecture that Demonstrates Where Data Services Fit

Understanding the architecture that supports data and data-maturity goals, including the components required to manage data from acquisition through distribution and retirement, is critical. It is also important to understand how these components fit into the firm’s overall technology architecture and infrastructure. Defining a clear data architecture and its components, including:

  • Data model(s)
  • Acquisition
  • Access
  • Distribution
  • Storage
  • Taxonomy

is required prior to integrating the data.

5: Data Operating Model – Understand How Data Traverses the Organization

It is crucial to understand the data operations and operating model, including who does what to the data and how data ownership changes over time or transfers among owners. Data lineage is key (where your data came from, its intended use, who has or is allowed access, and where it goes inside or outside the organization) to keeping data clean and optimizing its use. Data quality tracking, metrics and remediation will be required.

Existing recognized standards such as the Global Legal Entity Identifier (LEI) that can be acquired and distributed via data services can help in the sharing and reuse of data that is ‘core’ to the firm. They can also assist in tying together data sets used across the firm.

Checklist/Things to Consider

Once you’ve finished gathering requirements and understand the data landscape, including the roles and responsibilities described above, you’re ready to begin putting together your data services strategy. To build an all-encompassing strategy, the experts suggest including the following.

1: Defined Data Services Required

  • Classification: core vs. business-shared data and ownership
    • Is everyone speaking a common language?
    • What data is ‘core’ to the business, meaning it will need to be commonly defined and used across the organization?
    • What data will be used by a specific business that may not need to be uniformly defined?
    • What business-specific data will be shared across the organization, which may need to be uniformly defined and might need more governance?
  • Internal vs external sourcing
    • Has the business collected or created the data themselves or has it been purchased from a 3rd party? Are definitions, metadata and business rules defined?
    • Has data been gathered or sourced appropriately and with the correct uniform definitions, rules, metadata and classification, such as LEI?
  • Authoritative Data Sources for the Data Services
    • Have you documented where, from whom, and when the data was gathered (from Sources of Record or Sources of Origin)? For example, the Source of Origin might be a trading system, an accounting system or a payments system, while the general ledger might be the Source of Record for positions.
    • Who is the definitive source (internal/external)? Which system?
  • Data governance requirements
    • Have you adhered to the proper definitions, rules, and standards set in order to handle data?
    • Who should be allowed to access the data?
    • Which (critical, usually externally facing) applications must access the data directly?
  • Data operations and maintenance
    • Have you kept your data clean and up to date?
    • Are you up to speed with regulations, such as GDPR, and successfully obtained explicit consent for the information?
    • Following your organization chart and the rules and requirements detailed above, are the data owners known and informed, and do they understand that they are responsible for maintaining their data’s integrity?
    • Are data quality metrics monitored with a process to correct data issues?
    • Do all users with access to the data know who to speak to if there is a data quality issue and know how to fix it?
  • Data access, distribution and quality control requirements
    • Has the data been classified properly? Is it public information? If not, is it restricted to those who need it?
    • Have you defined how you share data between internal/external parties?
    • Have the appropriate rules and standards been applied to keep it clean?
    • Is there a clearly defined process for this?
  • Data integration requirements
    • If the data will be merged with other data sets/software, have the data quality requirements been met to ensure validity?
    • Have you prioritized the adoption of which applications must access the authoritative data distributed via data services directly?
    • Have you made adoption easy – allowing flexible forms of access to the same data (e.g., via spreadsheets, file transfers, direct APIs, etc.)?
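To make the checklist concrete, below is a purely illustrative sketch of a data service response that travels with its governance metadata (classification, authoritative source, owner, quality status). All names and fields here are hypothetical, not an industry standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DataServiceRecord:
    """Hypothetical shape for a 'core' data service response: the payload
    carries the governance metadata the checklist above calls for."""
    lei: str                  # Global Legal Entity Identifier (core, shared)
    legal_name: str
    source_of_record: str     # authoritative system the value came from
    owner: str                # accountable data owner
    classification: str       # e.g., "core" vs. "business-shared"
    as_of: date
    quality_checks_passed: bool

def get_counterparty(lei: str) -> DataServiceRecord:
    """Illustrative service call: one authoritative source, many consumers
    (spreadsheets, file transfers, and direct APIs can all wrap this)."""
    # In practice this would query the firm's golden-source store.
    return DataServiceRecord(
        lei=lei,
        legal_name="Example Counterparty Ltd",
        source_of_record="counterparty-master",
        owner="Reference Data Operations",
        classification="core",
        as_of=date.today(),
        quality_checks_passed=True,
    )
```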

2: Build or Acquire Data Services

To recap: are you building or acquiring your own Data Services? Either way, keep in mind that the following must be met and must adhere to compliance requirements:

  • Data sourcing and classification, assigning ownership
  • Data Access and Integration
  • Proper Data Services Implementation, access to authoritative data
  • Proper data testing, and data remediation, keeping the data clean
  • Appropriate access control and distribution of the data, flexible access
  • Quality control monitoring
  • Data issue resolution process

The use of, and regulations around, data will constantly evolve, as will the number of users data can support in business ventures. We hope this checklist provides a foundation for building and supporting your organization’s data strategies. If there are any areas you’re unclear on, take a look back through our first five blogs, which provide more in-depth overviews of using data services to support the business.

Thank you for tuning into our first blog series on data management. We hope that you found it informative but most importantly useful towards your business goals.

If you enjoyed our blog series or have questions on the topics discussed, write to us on Twitter @FRGRISK.

Dessa Glasser is a Principal with the Financial Risk Group, and an independent board member of Oppenheimer & Company, who assists Virtual Clarity, Ltd. on data solutions as an Associate. 


RELATED:

Data Is Big, Did You Know?

Data Management – The Challenges

Data as a Service (DaaS) Solution – Described

Data as a Service (DaaS) Data Sources – Internal or External?

Data as a Service (DaaS) – The Benefits

Is Your Business Getting The Full Bang for Its CECL Buck?

Accounting and regulatory changes often require resources and efforts above and beyond “business as usual”, especially changes like CECL that are significant departures from previous methods. The effort needed can be as complex as that of a completely new technology implementation and can take precedence over projects designed to improve your core business … and stakeholder value.

But with foresight and proper planning, you can prepare for a change like CECL by leveraging resources in a way that will maximize your efforts to meet these new requirements while also enhancing business value. At Financial Risk Group, we take this approach with each of our clients. The key is to start by asking “how can I use this new requirement to generate revenue and maximize business performance?”


The Biggest Bang Theory

In the case of CECL, there are two significant areas that will create the biggest institution-wide impact: analytics and data governance. While the importance of these is hardly new to financial institutions, we find that many neglect to leverage their CECL data and analytics efforts to create that additional value. Some basic first steps you can take include the following.

  • Ensure that the data utilized is accurate and that its access and maintenance align to the needs and policies of your business. In the case of CECL these will be employed to create scenarios, model, and forecast … elements that the business can leverage to address sales, finance, and operational challenges.
  • For CECL, analytics and data are leveraged in a much more comprehensive fashion than previous methods of credit assessment provided. Objectively assess the current state of these areas to understand how the efforts being put toward CECL implementation can be leveraged to enhance your current business environment.
  • Identify existing available resources. While some firms will need to spend significant effort creating new processes and resources to address CECL, others will use this as an opportunity to retire and re-invent current workflows and platforms.

Recognizing the business value of analytics and data may be intuitive, but what is often less intuitive is knowing which resources earmarked for CECL can be leveraged to realize that broader business value. The techniques and approaches we have put forward provide good perspective on the assessment and augmentation of processes and controls, but how can these changes be quantified? Institutions without experienced in-house resources are well advised to consider an external partner. The ability to leverage the expertise of staff experienced in the newest approaches and methodologies will allow your internal team to focus on its core responsibilities.

Our experience with this type of work has provided some very specific results that illustrate the short-term and longer-term value realized. The example below shows the magnitude of change and benefits experienced by one of our clients: a mid-sized North American bank. A thorough assessment of its unique environment led to a redesign of processes and risk controls. The significant changes implemented resulted in less complexity, more consistency, and increased automation. Additionally, value was created for business units beyond the risk department. While different environments will yield different results, those illustrated through the methodologies set forth here provide a good example to better judge the outcome of a process and controls assessment.


 | Legacy Environment | Automated Environment
Reporting Output | No manual controls available daily for risk reporting | Daily in-cycle reporting controls are automated with minimal manual interaction
Process Speed | Credit run 40+ hours; manually input variables prone to mistakes | Credit run 4 hours; variable-creation cycle time reduced from 3 days to 1
Controls & Audit | Multiple audit issues and regulatory MRAs | Audit issues resolved and MRAs closed
Model Execution | Spreadsheet driven | 90 models automated, eliminating 1,000 manual spreadsheets


While one approach will not fit all firms, providing clients with an experienced perspective on more fully utilizing their specific investment in CECL allows them to make decisions for the business that might otherwise never be considered, thereby optimizing the investment in CECL and ensuring you receive the full value from your CECL buck.

More information on how you can prepare for—and drive additional value through—CECL is available on our website and includes:

White Paper – CECL: Why the expectations are different

White Paper – CECL Scenarios: Considerations, Development and Opportunities

Blog – Data Management: The Challenges

Subscribe to our blog!