Is Your Business Getting The Full Bang for Its CECL Buck?

Accounting and regulatory changes often require resources and effort above and beyond “business as usual,” especially changes like CECL that depart significantly from previous methods. The effort needed can be as complex as that for a completely new technology implementation and can take precedence over projects designed to improve your core business … and stakeholder value.

But with foresight and proper planning, you can prepare for a change like CECL by leveraging resources in a way that will maximize your efforts to meet these new requirements while also enhancing business value. At Financial Risk Group, we take this approach with each of our clients. The key is to start by asking “how can I use this new requirement to generate revenue and maximize business performance?”


The Biggest Bang Theory

In the case of CECL, there are two significant areas that will create the biggest institution-wide impact: analytics and data governance. While the importance of these is hardly new to financial institutions, we are finding that many neglect to leverage their CECL data and analytics efforts to create that additional value. Some basic first steps you can take include the following.

  • Ensure that the data utilized is accurate and that its access and maintenance align with the needs and policies of your business. For CECL, these data will be employed to create scenarios, models, and forecasts … elements the business can leverage to address sales, finance, and operational challenges.
  • For CECL, analytics and data are leveraged far more comprehensively than under previous methods of credit assessment. Objectively assess the current state of these areas to understand how the efforts being put toward CECL implementation can be leveraged to enhance your current business environment.
  • Identify existing available resources. While some firms will need to spend significant effort creating new processes and resources to address CECL, others will use this as an opportunity to retire and re-invent current workflows and platforms.

Recognizing the business value of analytics and data may be intuitive, but knowing which resources earmarked for CECL can be leveraged to realize that broader business value is often less so. The techniques and approaches we have put forward provide good perspective on the assessment and augmentation of processes and controls, but how can these changes be quantified? Institutions without experienced in-house resources are well advised to consider an external partner. The ability to leverage the expertise of staff experienced in the newest approaches and methodologies will allow your internal team to focus on its core responsibilities.

Our experience with this type of work has produced some very specific results that illustrate the short-term and longer-term value realized. The example below shows the magnitude of change and benefits experienced by one of our clients, a mid-sized North American bank. A thorough assessment of its unique environment led to a redesign of processes and risk controls. The significant changes implemented resulted in less complexity, more consistency, and increased automation. Additionally, value was created for business units beyond the risk department. While different environments will yield different results, the results illustrated here provide a good benchmark for judging the outcome of a process and controls assessment.

| | Legacy Environment | Automated Environment |
|---|---|---|
| Reporting Output | No daily manual controls available for risk reporting | Daily in-cycle reporting controls are automated with minimal manual interaction |
| Process Speed | Credit run 40+ hours; manually input variables prone to mistakes | Credit run 4 hours; cycle time for variable creation reduced from 3 days to 1 |
| Controls & Audit | Multiple audit issues and regulatory MRAs | Audit issues resolved and MRAs closed |
| Model Execution | Spreadsheet driven | 90 models automated, eliminating 1,000 manual spreadsheets |

While one approach will not fit all firms, providing clients with an experienced perspective on more fully utilizing their specific investment in CECL allows them to make business decisions that might otherwise never be considered, thereby optimizing that investment and truly ensuring the full bang for the CECL buck.

More information on how you can prepare for—and drive additional value through—your CECL preparation is available on our website and includes:

White Paper – CECL: Why the expectations are different

White Paper – CECL Scenarios: Considerations, Development and Opportunities

Blog – Data Management: The Challenges

Current Expected Credit Loss (CECL): A New Paradigm for Captives, Too

Discussion of the ramifications of CECL for financial institutions has largely focused on banks, but as we addressed in a recent paper, “Current Expected Credit Loss: Why the Expectations Are Different,” this new accounting treatment extends to a much larger universe. One example is the captives that finance Americans’ love affair with cars; their portfolios of leases and loans have grown much larger, and the implications of CECL for them are correspondingly more significant.

As with other institutions, data, platforms, and modeling make up the challenges that captives will have to address. But unlike other types of institutions, captives have more concentrated portfolios, which may aid in “pooling” exercises but may also be inadvertently affected by scenario modeling. A basic tenet for all institutions is the life-of-loan estimate and the use of reasonable and supportable forecasts. While some institutions have had “challenger” models that moved in this direction, captives have not tended to utilize this type of approach.

The growth of captives’ portfolios and their correlation to a number of macroeconomic factors (e.g., interest rates, commodity prices, tariffs) call for data and scenarios that require a different level of modeling and forecasting. Because FASB does not provide template methodologies or calculations, these scenarios must be developed with the “reasonable and supportable” requirement in mind. While different approaches will likely be adopted, those that utilize transaction-level data can provide a higher level of accuracy over time, supporting the goals laid out in the new guidelines. As might be imagined, the value of experience in developing and deploying these types of models can’t be overemphasized.

We have found that the ability to manage the following functional components is critical to building a flexible platform that can accommodate the changing needs of its users (a minimal structural sketch follows the list):

  • Scenario Management
  • Input Data Mapping and Registry
  • Configuration Management
  • Model Management
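
To make the list concrete, the sketch below shows one way these components might be organized in code. It is a minimal illustration under our own assumptions; every class and field name is hypothetical rather than taken from any particular platform.

```python
# Minimal sketch of the four functional components named above. Every
# class and field name is hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class Scenario:                       # Scenario Management
    name: str
    macro_assumptions: dict           # e.g. {"interest_rate": 0.05, "used_car_index": 1.1}

@dataclass
class InputRegistry:                  # Input Data Mapping and Registry
    sources: dict = field(default_factory=dict)   # logical name -> physical location

    def register(self, name: str, location: str) -> None:
        self.sources[name] = location

@dataclass
class Configuration:                  # Configuration Management
    pool_definitions: list            # how loans and leases are pooled
    forecast_horizon_quarters: int    # reasonable-and-supportable window

@dataclass
class ModelCatalog:                   # Model Management
    versions: dict = field(default_factory=dict)  # model name -> version id

    def promote(self, model_name: str, version: str) -> None:
        self.versions[model_name] = version
```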

Experience has taught us that there are significant considerations in implementing CECL, but institutions that develop a well-structured plan can also realize improvements. Captives are advised to use this as an opportunity to realize efficiencies, primarily in technology and existing models. Considerations around data, platforms, and the models themselves should leverage available resources to ensure that investments made to address this change provide as much benefit as possible, both now and into the future.

Real Time Learning: A Better Approach to Trader Surveillance

An often-heard question in any discussion of Machine Learning (ML) tools is maybe the most obvious one: “So, how can we use them?”

The answer depends on the industry, but we think there are especially useful (and interesting) applications for the financial services sector. Firms in this sector have historically been open to the ML concept but haven’t been quick to jump on some of its potential solutions to common problems.

Let’s look at risk management at the trading desk, for example. If you want to mitigate risk, you need to be able to identify it in advance—say, to ensure your traders aren’t conducting out-of-market transactions or placing fictitious orders. The latest issue of the New Machinist Journal by Dr. Jimmie Lenz (available by clicking here) explains how. Trade desk surveillance is just one way that Machine Learning tools can help monitor the variety of activities that can cause grief for those tasked with risk management.
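
To give a flavor of what such monitoring can look like, here is a minimal sketch using an isolation forest, a common unsupervised anomaly detector. The features, simulated data, and thresholds are our own assumptions for illustration; this is not the method described in the New Machinist Journal.

```python
# Minimal sketch of unsupervised trade surveillance with an isolation
# forest. All features, values, and thresholds are invented for
# illustration; not the method from the New Machinist Journal.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-trade features: deviation from mid-market price (bps),
# order size as a fraction of average daily volume, cancel-to-fill ratio.
normal_trades = rng.normal(loc=[0.0, 0.01, 1.0],
                           scale=[5.0, 0.005, 0.5],
                           size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_trades)

# An out-of-market, oversized order with heavy cancellations should score
# as anomalous (decision_function < 0 means "likely outlier").
suspect = np.array([[120.0, 0.20, 15.0]])
print(model.decision_function(suspect))   # strongly negative -> flag for review
```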

Would you like to read more about the possibilities ML can bring to financial services process settings? Download “Real Time Learning: A Better Approach to Trader Surveillance,” along with other issues of the New Machinist Journal, by visiting www.frgrisk.com/resources.

Forecasting Capital Calls and Distributions

Early in his career, one of us was responsible for cash flow forecasting and liquidity management at a large multiline insurance company. We gathered extensive historical data on daily concentration bank deposits, withdrawals, and balances and developed an elementary but fairly effective model. Because insurance companies receive premium payments from and pay claims to many thousands of individuals and small companies, we found we could base reasonably accurate forecasts on the quarter of the year, month of the quarter, week of the month, and day of the week, taking holidays into account. This rough-and-ready approach enabled the money market traders to minimize overnight balances, make investment decisions early in the morning, and substantially extend the average maturity of their portfolios. It was an object lesson in the value of proactive cash management.
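
For readers curious what that rough-and-ready model might look like in code, here is a minimal sketch under stated assumptions: the column names and placeholder flow series are hypothetical, and holiday adjustments are omitted for brevity.

```python
# Minimal sketch of the calendar-profile forecast described above.
# Column names and the placeholder flow series are assumptions.
import pandas as pd

history = pd.DataFrame({"date": pd.date_range("2020-01-01", "2023-12-31", freq="B")})
history["net_flow"] = 1.0  # placeholder; in practice, observed daily net deposits

cal = history["date"].dt
history["quarter"] = cal.quarter                       # quarter of the year
history["month_of_quarter"] = (cal.month - 1) % 3 + 1  # month of the quarter
history["week_of_month"] = (cal.day - 1) // 7 + 1      # week of the month
history["day_of_week"] = cal.dayofweek                 # day of the week

# Average historical flow for each calendar bucket (holiday handling omitted).
profile = history.groupby(
    ["quarter", "month_of_quarter", "week_of_month", "day_of_week"]
)["net_flow"].mean()

# Forecast a future date by looking up its calendar bucket.
t = pd.Timestamp("2024-03-15")
forecast = profile.loc[(t.quarter, (t.month - 1) % 3 + 1, (t.day - 1) // 7 + 1, t.dayofweek)]
```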

It is not such a trivial matter for investors in private capital funds to forecast the timing and amount of capital calls and distributions. Yet maintaining adequate liquidity to meet obligations as they arise means accepting either a market risk or an opportunity cost that might be avoided. The market risk comes from holding domestic large-cap stocks that will have to be sold quickly, whatever the prevailing price, when a capital commitment is unexpectedly drawn down; the opportunity cost comes from adopting a defensive posture and holding cash or cash equivalents in excess of the amount needed for ongoing operations, especially when short-term interest rates are very low.

FRG is undertaking a financial modeling project aimed at forecasting capital calls and distributions. Our overall objective is to help investors with outstanding commitments escape the unattractive alternatives of holding excess cash or scrambling to liquidate assets to meet contractual obligations whose timing and amount are uncertain. To that end, we seek to assist in quantifying the risks associated with allocation weights and to understand the probability of future commitments so as to keep the total portfolio invested in line with those weights.

In other words, we want to make proactive cash management possible for private fund investors.

As a first step, we have formulated some questions.

  1. How do we model the timing and amount of capital calls and disbursements? Are there exogenous variables with predictive power?
  2. How does the timing of capital calls and disbursements correlate between funds of different vintages and underlying types (e.g., private equity from venture capital to leveraged buyouts, private credit, and real estate, among others)?
  3. Do private funds’ capital calls and distributions correlate with public companies’ capital issuance and dividend payout decisions?
  4. How do we model the growth of invested capital? What best explains the returns achieved before money is returned to LPs?
  5. What triggers distributions? 
  6. How do we allocate money to private funds while keeping an eye on total invested capital vs. asset allocation weights?
    1. The timing of capital calls and distributions is probabilistic (from #1).
    2. Diversification among funds can produce a smooth invested-capital profile, but we need to know how these funds co-move to characterize the distribution around that profile (from #2); the simulation sketch after this list illustrates the smoothing effect.
    3. A confounding problem is the growth of invested capital (from #3). This growth affects total portfolio value and the asset allocation weights. If total exposure is constrained, what is the probability of breaching those constraints?
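
To ground these questions, the sketch below simulates capital call timing as probabilistic draws and shows how diversification across funds smooths the invested-capital profile. The quarterly call probability and drawdown sizes are invented assumptions for illustration, not FRG’s model.

```python
# Purely illustrative Monte Carlo sketch: capital call timing as
# probabilistic draws (question 6a) and the smoothing effect of
# diversification across funds (question 6b). The 30% quarterly call
# probability and drawdown sizes are invented assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_funds, n_quarters, commitment = 20, 24, 10.0   # e.g., $10M committed per fund

called = np.zeros((n_funds, n_quarters))
for f in range(n_funds):
    remaining = commitment
    for q in range(n_quarters):
        if remaining > 0 and rng.random() < 0.30:            # a call arrives
            amount = min(remaining, rng.uniform(0.05, 0.25) * commitment)
            called[f, q] = amount
            remaining -= amount

single_fund = np.cumsum(called[0])                   # one fund's call profile
portfolio = np.cumsum(called.sum(axis=0)) / n_funds  # per-fund average profile

# The diversified profile is far smoother than any single fund's, but
# co-movement between funds (question 2) would erode this effect.
print(np.std(np.diff(single_fund)), np.std(np.diff(portfolio)))
```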

We invite front-line investors in limited partnerships and similar vehicles to join the discussion. We would welcome and appreciate your input on the conceptual questions. Please contact Dominic Pazzula at info@frgrisk.com if you have an interest in this topic.

Turning a Blind Eye to the Risky Business of Incentive-based Sales Practices 

Should you be monitoring your sales activities to detect anomalous behaviors?

The use of sales incentives (commissions, bonuses, etc.) to motivate the behavior of salespeople has a long history in the United States. We would like to assume that incentive-based pay is not structured with nefarious or abusive intent toward customers, but a number of recent and well-publicized stories of mistreatment of both customers and customer information show that these negative consequences do exist. Likely, the business practice of turning an administrative blind eye to the damage done to consumers by these sales incentive programs has played an even greater role in the scale of abuse uncovered over the last decade. In the most recent cases of unchecked and large-scale customer abuse, with particular attention focused on the financial services industry, disputes arising from this paradigm of tying employee benefits (broadly, tying employment and/or income potential to sales) were resolved through arbitration and frequently typecast as “a cost of doing business.”

Today, are you putting your business, and all those associated with its success, at risk by turning a blind eye to the effects of your business practices, including your employee incentive programs? New consequences are being laid on corporate leaders and board members for all business practices used by the company, and the defense of not knowing the intricacies and results of these practices does not protect you from these risks.

We have developed a methodology to detect both customer sales and individual product behaviors that are indicative of problematic situations requiring additional examination. Our methodology goes beyond the aggregate sales primarily discussed in the literature to highlight individuals and/or groups that are often overlooked when analyzing such data.
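
As a generic illustration of individual-level monitoring, and emphatically not our methodology itself, a first-pass screen can score each salesperson’s product activity against the peer group with a robust z-score. All data and the threshold below are invented.

```python
# Generic illustration only (not FRG's methodology): flag individuals
# whose product-level activity is extreme relative to the peer group,
# using a robust z-score. All data and the threshold are invented.
import pandas as pd

sales = pd.DataFrame({
    "salesperson": ["A", "B", "C", "D", "E"],
    "accounts_opened": [52, 48, 55, 50, 140],   # hypothetical monthly counts
})

med = sales["accounts_opened"].median()
mad = (sales["accounts_opened"] - med).abs().median()   # median absolute deviation
sales["robust_z"] = 0.6745 * (sales["accounts_opened"] - med) / mad

# A conventional cutoff of 3.5 flags only genuine outliers; here,
# salesperson E merits additional examination.
print(sales[sales["robust_z"].abs() > 3.5])
```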

A forthcoming paper, “Sales Practices: Monitoring Sales Activity for Anomalous Behaviors,” will explore these issues, and a resolution, in depth. Visit any of our social media channels for the link.


The Case for Outsourced Hosting

Middle office jobs are fascinating. In performance analysis, spotting dubious returns and tracing them back to questionable inputs requires insight that seems intuitive or innate but results in fact from a keen understanding of markets, asset classes, investment strategies, security characteristics, and portfolio dynamics. Risk management additionally calls for imagination in scenario forecasting, math and programming skills in model development, judgment in prioritizing and mitigating identified risks, and managerial ability in monitoring exposures that continually shift with market movements and the firm’s portfolio decisions. Few careers so completely engage such a wide range of talents.

Less rewarding is handling the voluminous information that feeds the performance measurement system and risk management models. Financial data management is challenging for small banks and investment managers, and it becomes more and more difficult as the business grows organically, adding new accounts, entering new markets, and implementing new strategies that often use derivatives. Not to mention the extreme data integration issues that stem from business combinations!

And data management hasn’t any upside: nobody in your chain of command notices when it’s going well, and everyone reacts when it fails.

Nonetheless, reliable data is vital for informative performance evaluation and effective risk management, especially at the enterprise level. It doesn’t matter how hard it is to collect, format, sort, and reconcile the data from custodians and market data services as well as your firm’s own systems (all too often including spreadsheets) in multiple departments. Without timely, accurate, properly classified information on all the firm’s long and short positions across asset classes, markets, portfolios, issuers, and counterparties, you can’t know where you stand. You can’t answer questions. You can’t do your job.

Adding up the direct, explicit costs of managing data internally is a straightforward exercise; the general ledger keeps track of license fees. The indirect, implicit costs are less transparent. For example, they include the portion of IT, accounting, and administrative salaries and benefits attributable to mapping data to the performance measurement system and the risk models, coding multiple interfaces, maintaining the stress testing environment, correcting security identifiers and input errors—all the time-consuming details that go into supporting the middle office. The indirect costs also include ongoing managerial attention and the potential economic impact of mistakes that are inevitable if your company does not have adequate staffing and well-documented, repeatable, auditable processes in place to support smooth performance measurement and risk management operations.

You can’t delegate responsibility for the integrity of the raw input data provided by your firm’s front office, portfolio assistants, traders, and security accountants. But you can outsource the processing of that data to a proven provider of hosting services. And then your analysts can focus on the things they do best—not managing data but evaluating investment results and enterprise risk.

Learn more about FRG’s Hosting Services here.

Spreadsheet Risk Is Career Risk

Stop and think: how much does your firm — and your work group — depend upon electronic spreadsheets to get mission-critical assignments done? How badly could a spreadsheet error damage your company’s reputation? Its financial results? Your own career?

Here’s an example. Advising Tibco Software on its sale to Vista Equity Partners, Goldman Sachs used a spreadsheet that overstated its client’s shares outstanding and, as a result, overvalued the company by $100 million. The Wall Street Journal reported, “It’s not clear who created the spreadsheet. Representatives for Tibco and Goldman declined to comment. Vista couldn’t be reached for comment.” (October 16, 2014.) Nonetheless, it’s safe to assume that the analyst who prepared the spreadsheet was identified, along with his or her manager, and that they both were penalized for the mistake.

Spreadsheets proliferate in financial organizations for good reasons. They offer convenient, flexible, and surprisingly powerful ad hoc solutions to all sorts of analytical problems. But as risk managers we are an impatient lot, and all too often results-oriented people like us turn to spreadsheets even for production applications because we cannot wait for IT resources to become available. We know that the IT department has a hard-and-fast policy of disavowing the business lines’ spreadsheets, but that’s all right, we tell ourselves, because “it’s only temporary.” Then we turn our attention to another problem….

Let’s take it as axiomatic that the firm’s risk management operations should not exacerbate the firm’s exposure to operational risk.

You may already have established some controls to mitigate spreadsheet risk in production applications. For example, key spreadsheets may be encrypted, stored on dedicated, non-networked PCs with password protection, and backed up every night. And it might be said that spreadsheets are self-documenting because the macros and formulas are visible and the functions are vendor-defined. As a practical matter, however, only the analyst who originally developed a spreadsheet fully understands it. When she leaves, and other analysts add enhancements — possibly with new names for existing variables — the spreadsheet becomes much more difficult to troubleshoot.

We recommend taking these steps now:

  • Starting in the risk management area, inventory all the spreadsheets in use across the firm’s operations (a starting-point script follows this list).
  • Confirm that every time a spreadsheet enters a workflow it is identified as such. Cross-check the workflow documentation and swim lane diagrams against the spreadsheet inventory and update them where necessary.
  • Document every non-trivial spreadsheet, minimally including its purpose, the data sources, and any procedural tips.
  • Select the operationally embedded spreadsheets whose failure would be most injurious to departmental objectives and downstream processes, and look for permanent solutions with proper controls.
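
As a starting point for that inventory step, a small script can walk shared drives and record every workbook it finds. This is a minimal sketch; the share path and extension list are assumptions to adapt to your environment.

```python
# Starting-point sketch for the spreadsheet inventory (first step above).
# The share path and extension list are assumptions; adapt as needed.
import csv
from datetime import datetime
from pathlib import Path

SHARE = Path("/mnt/shared")                    # hypothetical departmental share
EXTS = {".xls", ".xlsx", ".xlsm", ".xlsb"}

with open("spreadsheet_inventory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["path", "size_bytes", "last_modified"])
    for p in SHARE.rglob("*"):
        if p.is_file() and p.suffix.lower() in EXTS:
            st = p.stat()
            writer.writerow([str(p), st.st_size,
                             datetime.fromtimestamp(st.st_mtime).isoformat()])
```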

Whether or not it’s explicitly listed in your performance objectives, you owe it to your firm and yourself to migrate mission-critical spreadsheet applications to a reliable platform with codified controls. Systems development life cycle (SDLC) methodologies impose the discipline that’s needed in all phases of the project, from requirements analysis through deployment and maintenance, to minimize operational risk. This is not a trivial task; transferring the ad hoc functionality you currently have embedded in spreadsheets to a system that is well designed and amply supported takes heart. But the potential consequences of inaction are unacceptable. We strongly encourage you to take the necessary steps before a problem comes to light because a key person leaves the organization, a client spots a costly mistake, or — in the worst case — an operational crisis prevents the firm from meeting its contractual or regulatory obligations. And you lose your job.


Click here for information about FRG’s state-of-the-art risk modeling services and here for information about our hosting services.      

Subscribe to our blog!