Quantifying the Value of Electricity Storage

ABSTRACT: Hedging volatility, and identifying the opportunities it creates, is normally a discussion confined to the capital markets. Yet demand for electricity in the United States, and the volatility that accompanies it, has never been more pronounced. The upcoming paper, “Quantifying the Value of Electricity Storage,” will examine the factors that have driven the growth of volatility, both realized and potential.

There is widespread recognition of the value of energy storage, and new technologies promise to expand this capability for firms involved in every area of electricity generation. Objective tools to value these options, though, have been limited, as has insight into when mitigation efforts make economic sense.

To answer these questions for electricity generators of all types, we have created an economics-based model that addresses both the initial acquisition of storage capacity and the optimization of its deployment, based on the unique attributes of the population served.

Links to the paper will be posted on FRG’s social media channels.

Forecasting Capital Calls and Distributions

Early in his career, one of us was responsible for cash flow forecasting and liquidity management at a large multiline insurance company. We gathered extensive historical data on daily concentration bank deposits, withdrawals, and balances and developed an elementary but fairly effective model. Because insurance companies receive premium payments from and pay claims to many thousands of individuals and small companies, we found we could base reasonably accurate forecasts on the quarter of the year, month of the quarter, week of the month, and day of the week, taking holidays into account. This rough-and-ready approach enabled the money market traders to minimize overnight balances, make investment decisions early in the morning, and substantially extend the average maturity of their portfolios. It was an object lesson in the value of proactive cash management.
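
As a rough illustration only (not the original model), the calendar-bucket approach described above can be sketched in a few lines of Python. Everything below, including the seasonal deposit pattern, is synthetic.

```python
# A minimal sketch of a calendar-driven cash flow forecast: bucket each
# business day by quarter, month-of-quarter, week-of-month, and weekday,
# then forecast from the historical average of each bucket.
import random
import statistics
from datetime import date, timedelta

random.seed(42)

def calendar_key(d: date) -> tuple:
    """Bucket a date by quarter, month-of-quarter, week-of-month, weekday."""
    quarter = (d.month - 1) // 3 + 1
    month_of_quarter = (d.month - 1) % 3 + 1
    week_of_month = (d.day - 1) // 7 + 1
    return (quarter, month_of_quarter, week_of_month, d.weekday())

# Synthetic history: net daily deposits with a seasonal/weekday pattern plus noise.
history = {}
d = date(2015, 1, 1)
while d < date(2018, 1, 1):
    if d.weekday() < 5:  # business days only
        level = 100 + 20 * ((d.month - 1) % 3) - 10 * d.weekday()
        history[d] = level + random.gauss(0, 5)
    d += timedelta(days=1)

# "Model": the average historical net flow for each calendar bucket.
buckets = {}
for day, flow in history.items():
    buckets.setdefault(calendar_key(day), []).append(flow)
forecast = {k: statistics.mean(v) for k, v in buckets.items()}

# Forecast a future business day from its calendar position alone.
target = date(2018, 2, 6)  # a Tuesday in the second month of Q1
print(round(forecast[calendar_key(target)], 1))
```

A real implementation would also carry a holiday calendar and confidence bands, but even this skeleton shows why the approach worked: retail-scale premium and claim flows are strongly calendar-driven.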

It is not such a trivial matter for investors in private capital funds to forecast the timing and amount of capital calls and distributions. Yet maintaining adequate liquidity to meet obligations as they arise means accepting either a market risk or an opportunity cost that might be avoided. The market risk comes from holding domestic large-cap stocks that will have to be sold quickly, whatever the prevailing price, when a capital commitment is unexpectedly drawn down; the opportunity cost comes from adopting a defensive posture and holding cash or cash equivalents in excess of the amount needed for ongoing operations, especially when short-term interest rates are very low.

FRG is undertaking a financial modeling project aimed at forecasting capital calls and distributions. Our overall objective is to help investors with outstanding commitments escape the unattractive alternatives of holding excess cash or scrambling to liquidate assets to meet contractual obligations whose timing and amount are uncertain. To that end, we seek to assist in quantifying the risks associated with allocation weights and to understand the probability of future commitments so as to keep the total portfolio invested in line with those weights.

In other words, we want to make proactive cash management possible for private fund investors.

As a first step, we have formulated some questions.

  1. How do we model the timing and amount of capital calls and disbursements? Are there exogenous variables with predictive power?
  2. How does the timing of capital calls and disbursements correlate across funds of different vintages and underlying types (e.g., private equity from venture capital to leveraged buyouts, private credit, and real estate, among others)?
  3. Do private funds’ capital calls and distributions correlate with public companies’ capital issuance and dividend payout decisions?
  4. How do we model the growth of invested capital? What best explains the returns achieved before money is returned to LPs?
  5. What triggers distributions? 
  6. How do we allocate money to private funds keeping an eye on total invested capital vs. asset allocation weights?
    1. The timing of capital calls and distributions is probabilistic (from #1).
    2. Diversification among funds can produce a smooth invested capital profile. But we need to know how these funds co-move to create distributions around that profile (from #2).
    3. A confounding problem is the growth of invested capital (from #4). This growth affects total portfolio value and the asset allocation weights. If total exposure is constrained, what is the probability of breaching those constraints?
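
By way of illustration only, question 1 might be approached with a Monte Carlo sketch in the spirit of the Takahashi-Alexander model of private fund cash flows, in which a fund calls a fraction of its remaining commitment each period and distributes a growing fraction of NAV. All rate parameters below are hypothetical assumptions, not estimates.

```python
# Toy Monte Carlo of one fund's capital calls, distributions, and NAV.
# call_rate, growth, and the distribution "bow" are illustrative only.
import random

random.seed(7)

def simulate_fund(commitment=100.0, years=12, call_rate=0.35,
                  growth=0.10, bow=2.5):
    """Yearly capital calls, distributions, and NAV for one simulated fund."""
    uncalled, nav = commitment, 0.0
    path = []
    for t in range(1, years + 1):
        frac = max(0.0, min(1.0, call_rate + random.gauss(0, 0.05)))
        call = uncalled * frac
        uncalled -= call
        nav = (nav + call) * (1 + growth + random.gauss(0, 0.08))
        dist = nav * (t / years) ** bow      # distributions are back-loaded
        nav -= dist
        path.append((t, round(call, 1), round(dist, 1), round(nav, 1)))
    return path

for year, call, dist, nav in simulate_fund()[:5]:
    print(year, call, dist, nav)
```

Simulating many such funds with correlated shocks would begin to address questions 2 and 6: the dispersion of the aggregate invested-capital path is exactly what determines the probability of breaching allocation constraints.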

We invite front-line investors in limited partnerships and similar vehicles to join the discussion. We would welcome and appreciate your input on the conceptual questions. Please contact Dominic Pazzula at info@frgrisk.com if you have an interest in this topic.

IFRS 9: Evaluating Changes in Credit Risk

Determining whether an unimpaired asset’s credit risk has meaningfully increased since the asset was initially recognized is one of the most consequential issues banks encounter in complying with IFRS 9. Recall the stakes:

  • The expected credit loss (ECL) for Stage 1 assets is calculated using the 12-month PD.
  • The ECL for Stage 2 assets (defined as assets whose credit risk has significantly increased since they were first recognized on the bank’s books) is calculated using the lifetime PD, just as it is for Stage 3 assets (which are in default).

To make the difference more concrete, consider the following:

  • A bank extends an interest-bearing five-year loan of $1 million to Richmond Tool, a hypothetical Virginia-based tool, die, and mold maker serving the defense industry.
  • At origination, the lender estimates the PD for the next 12 months at 1.5%, the PD for the rest of the loan term at 4%, and the loss that would result from default at $750,000.
  • In a subsequent reporting period, the bank updates those figures to 2.5%, 7.3%, and $675,000, respectively.

If the loan were still considered a Stage 1 asset at the later reporting date, the ECL would be $16,875. But if it is deemed a Stage 2 or Stage 3 asset, then the ECL is $66,150, nearly four times as great.
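
The arithmetic behind those two figures, using the updated estimates for the hypothetical Richmond Tool loan:

```python
# Stage 1 vs. Stage 2/3 ECL for the Richmond Tool example, at the later
# reporting date. The loss-given-default is taken as a dollar amount,
# as in the example above.
pd_12m = 0.025        # updated 12-month probability of default
pd_rest = 0.073       # updated PD over the remainder of the loan term
lgd_amount = 675_000  # updated loss given default, in dollars

ecl_stage1 = pd_12m * lgd_amount                 # 12-month ECL
ecl_stage2 = (pd_12m + pd_rest) * lgd_amount     # lifetime ECL

print(round(ecl_stage1, 2))  # 16875.0
print(round(ecl_stage2, 2))  # 66150.0
```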

Judging whether the credit risk underlying those PDs has materially increased is obviously important. But it is also difficult. There is a “rebuttable presumption” that an asset’s credit risk has increased materially when contractual payments are more than 30 days past due. In general, however, the bank cannot rely solely upon past-due information if forward-looking information is to be had, either on a loan-specific or a more general basis, without unwarranted trouble or expense.

The bank need not undertake an exhaustive search for information, but it should certainly take into account pertinent intelligence that is routinely gathered in the ordinary course of business.

For instance, Richmond Tool’s financial statements are readily available. Balance sheets are prepared as of a point in time; income and cash flow statements reflect periods that have already ended. Nonetheless, traditional ratio analysis serves to evaluate the company’s prospects as well as its current capital structure and historical operating results. With sufficient data, models can be built to forecast these ratios over the remaining life of the loan. Richmond Tool’s projected financial position and earning power can then be used to predict stage transitions.
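
As a hedged sketch of that idea, the fragment below computes two common ratios from invented Richmond Tool figures and extends them with a naive linear trend; a production model would use a proper statistical forecast, not a straight line.

```python
# Illustrative ratio analysis: interest coverage and leverage from
# hypothetical statements, extended by a naive linear trend.
years = [2015, 2016, 2017]
ebit = [210, 185, 160]            # $ thousands, invented figures
interest_expense = [50, 52, 55]
total_debt = [1000, 1040, 1100]
equity = [800, 790, 770]

coverage = [e / i for e, i in zip(ebit, interest_expense)]
leverage = [d / eq for d, eq in zip(total_debt, equity)]

def linear_trend(series, steps):
    """Extend a series by its average historical slope."""
    slope = (series[-1] - series[0]) / (len(series) - 1)
    return [series[-1] + slope * (k + 1) for k in range(steps)]

print([round(x, 2) for x in coverage])                 # [4.2, 3.56, 2.91]
print([round(x, 2) for x in leverage])                 # [1.25, 1.32, 1.43]
print([round(x, 2) for x in linear_trend(coverage, 2)])  # [2.26, 1.62]
```

A deteriorating coverage trend like this one, projected over the remaining loan term, is exactly the kind of forward-looking signal that can feed a stage-transition model.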

Pertinent external information can also be gathered without undue cost or effort. For example, actual and expected changes in the general level of interest rates, mid-Atlantic unemployment, and defense spending are likely to affect Richmond Tool’s business prospects, and, therefore, the credit risk of the outstanding loan. The same holds true for regulatory and technological developments that affect the company’s operating environment or competitive position.

Finally, the combination of qualitative information and non-statistical quantitative information such as actual financial ratios may be enough to reach a conclusion. Often, however, it is appropriate to apply statistical models and internal credit rating processes, or to base the evaluation on both kinds of information. In addition to designing, populating, and testing mathematical models, FRG can help you integrate the statistical and non-statistical approaches into your IFRS 9 platform.

For more information about FRG’s modeling expertise, please click here.

Turning a Blind Eye to the Risky Business of Incentive-based Sales Practices 

Should you be monitoring your sales activities to detect anomalous behaviors?

The use of sales incentives (commissions, bonuses, etc.) to motivate salespeople has a long history in the United States.  We would all like to assume that incentive-based pay is never structured with nefarious or abusive effects on customers in mind, but a number of recent and well-publicized stories of mistreatment of both customers and customer information show that these negative consequences do exist.  Likely, the business practice of turning an administrative blind eye to the damage these sales incentive programs do to consumers has played an even greater role in the scale of abuse uncovered over the last decade.  In the most recent cases of unchecked, large-scale customer abuse, with particular attention focused on the financial services industry, this paradigm of tying employee benefits (broadly, employment and/or income potential) to sales was resolved through arbitration and frequently written off as “a cost of doing business”.

Today, are you putting your business, and all those associated with its success, at risk by turning a blind eye to the effects of your business practices, including your employee incentive programs?  New consequences are being laid on corporate leaders and board members for all the business practices a company uses, and not knowing the intricacies and results of those practices is no defense against these risks.

We have developed a methodology to detect both customer sales and individual product behaviors that are indicative of problematic situations requiring additional examination.  Our methodology goes beyond the aggregate sales figures, which are the primary focus of the literature, to highlight individuals and groups that are often overlooked when such data is analyzed.

A forthcoming paper, “Sales Practices: Monitoring Sales Activity for Anomalous Behaviors,” will explore these issues, and a resolution, in depth. Visit any of our social media channels for the link.

IFRS 9: Modeling Challenges

Calculating expected credit losses under IFRS 9 is easy. It requires little more than high school algebra to determine the aggregate present value of future cash flows. But it is not easy to ascertain the key components used by that basic equation, regardless of whether the approach taken is “advanced” (i.e., where PD, LGD, and EAD are modeled) or “simplified” (also called “intermediate”). The forward-looking stance mandated by IFRS 9 makes the inherently difficult process of specifying these variables all the more complex.

For the sake of brevity, let’s consider only the advanced approach for this discussion. There are two immediate impacts on PD model estimation: the point-in-time requirements and the length of the forecast horizon.

PD estimates need to reflect point-in-time (PIT) rather than through-the-cycle (TTC) values. What this means is that PDs are expected to represent the current period’s economic conditions instead of some average through an economic cycle. Bank risk managers will have to decide whether they can adapt a CCAR (or other regulatory) model to this purpose, determine a way to convert a TTC PD to a PIT PD, or build an entirely new model.
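
One commonly cited way to condition a TTC PD on current conditions is a single-factor (Vasicek-style) transform. The asset correlation and the economic-state index below are illustrative assumptions, not calibrated values.

```python
# Convert a through-the-cycle PD to a point-in-time PD by conditioning
# on a systematic economic factor z (Vasicek single-factor form).
from statistics import NormalDist

N = NormalDist()

def pit_pd(ttc_pd: float, z: float, rho: float = 0.12) -> float:
    """Condition a TTC PD on the economic state z.

    z < 0 represents a downturn (raises the PD); z > 0 an upswing.
    rho is the asset correlation, an illustrative assumption here.
    """
    num = N.inv_cdf(ttc_pd) - rho ** 0.5 * z
    return N.cdf(num / (1 - rho) ** 0.5)

ttc = 0.015
print(round(pit_pd(ttc, z=0.0), 4))   # mid-cycle conditional PD
print(round(pit_pd(ttc, z=-1.5), 4))  # downturn: PIT PD above TTC
print(round(pit_pd(ttc, z=1.5), 4))   # upswing: PIT PD below TTC
```

Whether such a conversion is acceptable, versus building a new PIT model outright, is exactly the judgment call described above.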

The length of the forecast horizon has two repercussions. First, one must consider how many models to build for estimating PDs throughout the forecast. For example, it may be determined that a portfolio warrants one model for year 1, a second model for years 2 and 3, and a third model for the years beyond. Second, one should consider how far into the forecast horizon to rely on models at all. Given the impact of model risk, along with the onus of maintaining multiple models, PDs for horizons greater than, say, seven years may be better estimated by drawing a value from some percentile of an empirical distribution.


Comparatively speaking, bank risk managers may find it somewhat less difficult to estimate LGDs, especially if collateral values are routinely updated and historical recovery rates for comparable assets are readily available in the internal accounting systems. That said, IFRS 9 requires an accounting LGD, so models will need to be developed to accommodate this, or a process will have to be defined to convert an economic LGD into an accounting one.
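
To make the economic-versus-accounting distinction concrete: IFRS 9 expects cash shortfalls to be discounted at the loan's original effective interest rate (EIR), so late-arriving recoveries raise the accounting LGD relative to an undiscounted figure. The recovery profile and rates below are invented for illustration.

```python
# Accounting LGD: discount expected recoveries at the original EIR.
ead = 1_000_000.0
eir = 0.06                      # original effective interest rate, illustrative
recoveries = [(1, 100_000), (2, 150_000), (3, 100_000)]  # (years after default, $)

pv_recoveries = sum(cash / (1 + eir) ** t for t, cash in recoveries)
undiscounted_lgd = 1 - sum(c for _, c in recoveries) / ead
accounting_lgd = 1 - pv_recoveries / ead

print(round(undiscounted_lgd, 4))  # 0.65
print(round(accounting_lgd, 4))    # 0.6882 -- higher, recoveries arrive late
```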

Projecting EADs is similarly challenging. Loan amortization schedules generally provide a valid starting point, but unfortunately they are only useful for installment loans. How does one treat a revolving exposure? Can one leverage, and tweak, the same rules used for CCAR? In addition, embedded options have to be taken into account. There’s no avoiding it: estimating EADs calls for advanced financial modeling.
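
For revolving exposures, one widely used approximation applies a credit conversion factor (CCF) to the undrawn limit; under IFRS 9 the CCF would itself need to be modeled, and the value below is purely illustrative.

```python
# EAD approximation for a revolving exposure: current balance plus an
# assumed draw-down (CCF) of the unused limit.
def revolving_ead(drawn: float, limit: float, ccf: float) -> float:
    """EAD = drawn balance + CCF * undrawn headroom (never negative)."""
    return drawn + ccf * max(0.0, limit - drawn)

print(revolving_ead(drawn=40_000, limit=100_000, ccf=0.5))   # 70000.0
print(revolving_ead(drawn=120_000, limit=100_000, ccf=0.5))  # 120000.0
```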

As mentioned above, there are differences between the requirements of IFRS 9 and those of other regulatory requirements (e.g., CCAR). As a result, the models that banks use for stress testing or other regulatory functions cannot be used as-is for IFRS 9 reporting. Bank risk managers will have to decide, then, whether their CCAR models can be adapted with relatively minor modifications. In many cases they may conclude that it makes more sense to develop new models. Then all the protocols and practices of sound model design and implementation come into play.

Of course, it is also important to explain the conceptual basis and present the supporting evidence for PD, LGD, and EAD estimates to senior management—and to have the documentation on hand in case independent auditors or regulatory authorities ask to see it.

In short, given PD, LGD, and EAD, it’s a trivial matter to calculate expected credit losses. But preparing to comply with the IFRS 9 standard is serious business. It’s time to marshal your resources.

IFRS 9: Classifying and Staging Financial Assets

Under IFRS 9, Financial Instruments, banks will have to estimate the present value of expected credit losses in a way that reflects not only past events but also current and prospective economic conditions. Clearly, complying with the 160-page standard will require advanced financial modeling skills. We’ll have much more to say about the modeling challenges in upcoming posts. For now, let’s consider the issues involved in classifying financial assets and liabilities.

The standard introduces a principles-based classification scheme that will require banks to look at financial instruments in a new way. Derivative assets are classified as “fair value through profit and loss” (FVTPL), but other financial assets have to be sorted according to their individual contractual cash flow characteristics and the business model under which they are held. Figure 1 summarizes the classification process for debt instruments. There are similar decisions to be made for equities.

The initial classification of financial liabilities is, if anything, more important because they cannot be reclassified. Figure 2 summarizes the simplest case.

That’s only the first step. Once all the bank’s financial assets have been classified, they have to be sorted into stages reflecting their exposure to credit loss:

  • Stage 1 assets are performing
  • Stage 2 assets are underperforming (that is, there has been a significant increase in their credit risk since the time they were originally recognized)
  • Stage 3 assets are non-performing and therefore impaired

These crucial determinations have direct consequences for the period over which expected credit losses are estimated and the way in which effective interest is calculated. Mistakes in staging can have a very substantial impact on the bank’s credit loss provisions.
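
The staging logic described above can be sketched as a simple rule set. The SICR threshold below is an illustrative placeholder for a bank's own significant-increase test, which would compare lifetime default risk at the reporting date with the risk expected at initial recognition.

```python
# Simplified IFRS 9 stage assignment. The "PD more than doubled" test is
# an illustrative stand-in for a bank's actual SICR criteria.
def assign_stage(days_past_due: int, in_default: bool,
                 pd_now: float, pd_at_origination: float,
                 sicr_ratio: float = 2.0) -> int:
    if in_default:
        return 3
    # Rebuttable presumption: >30 days past due implies a significant increase.
    if days_past_due > 30:
        return 2
    # Illustrative SICR test: PD has more than doubled since origination.
    if pd_now > sicr_ratio * pd_at_origination:
        return 2
    return 1

print(assign_stage(0, False, 0.012, 0.010))    # 1: performing
print(assign_stage(45, False, 0.012, 0.010))   # 2: past-due presumption
print(assign_stage(0, False, 0.025, 0.010))    # 2: PD more than doubled
print(assign_stage(0, True, 0.300, 0.010))     # 3: in default
```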

In addition to the professional judgment that any principles-based regulation or accounting standard demands, preparing data for the measurement of expected credit losses requires creating and maintaining both business rules and data transformation rules that may be unique for each portfolio or product. A moderately complex organization might have to manage hundreds of rules and data pertaining to thousands of financial instruments. Banks will need systems that make it easy to update the rules (and debug the updates); track data lineage; and extract both the rules and the data for regulators and auditors.

IFRS 9 is effective for annual periods beginning on or after January 1, 2018. That’s only about 18 months from now. It’s time to get ready.

IFRS 9 Figure 1

IFRS 9 Figure 2

Risk Premia Portfolio Case Study

See how FRG’s VOR (Visualization of Risk) platform works for a major U.S. foundation: download a case study that explores how we customized VOR application tools to help them with their day-to-day portfolio management activities, as well as their monthly analysis and performance reporting.

The study shows how FRG was able to leverage its econometric expertise, system development capability and logistical strength to empower the foundation’s specialized investment team. Read the study, and learn more about VOR, here.

Managing Model Risk

The Federal Reserve and the OCC define model risk as “the potential for adverse consequences from decisions based on incorrect or misused model outputs and reports.”[1]  Statistical models are the core of stress testing and credit analysis, but banks are increasingly using them in strategic planning. And the more banks integrate model outputs into their decision making, the greater their exposure to model risk.

Regulators have singled out model risk for supervisory attention;[2] managers who have primary responsibility for their bank’s model development and implementation processes should be no less vigilant. This article summarizes the principles and procedures we follow to mitigate model risk on behalf of our clients.

The first source of model risk is basing decisions on incorrect output.  Sound judgment in the design stage and procedural discipline in the development phase are the best defenses against this eventuality. The key steps in designing a model to meet a given business need are determining the approach, settling on the model structure, and articulating the assumptions.

  • Selecting the approach means choosing the optimal level of granularity (for example, should the model be built at the loan level or the segment level?).
  • Deciding on the structure means identifying the most suitable quantitative techniques (for example, should a decision tree, a multinomial logistic regression, or a deep learning model be used?).
  • Stating the assumptions means describing both those related to the model structure (for instance, the distribution of error terms) and those pertaining to the methodology (such as default expectations and the persistence of historical relationships over the forecast horizon).

Once the model is defined, the developers can progressively refine it, subjecting it to rounds of robust testing both in and out of sample. They will make further adjustments until the model reliably produces plausible results.
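
That in-sample/out-of-sample discipline can be illustrated with a toy example: fit on an early window, evaluate on a held-out later window, and compare errors. The data and model here are synthetic; the point is the protocol, not the fit.

```python
# Chronological train/test split for a simple least-squares model on
# synthetic data: out-of-sample error close to in-sample error is one
# sign the model is not merely memorizing its development sample.
import random

random.seed(1)
x = [i / 10 for i in range(100)]
y = [2.0 * xi + 1.0 + random.gauss(0, 0.5) for xi in x]

split = 80                       # chronological split, not a random shuffle
x_tr, y_tr = x[:split], y[:split]
x_te, y_te = x[split:], y[split:]

# Ordinary least squares slope/intercept on the training sample only.
n = len(x_tr)
mx, my = sum(x_tr) / n, sum(y_tr) / n
slope = sum((a - mx) * (b - my) for a, b in zip(x_tr, y_tr)) / \
        sum((a - mx) ** 2 for a in x_tr)
intercept = my - slope * mx

def rmse(xs, ys):
    """Root-mean-square error of the fitted line on (xs, ys)."""
    return (sum((slope * a + intercept - b) ** 2
                for a, b in zip(xs, ys)) / len(xs)) ** 0.5

print(round(rmse(x_tr, y_tr), 3), round(rmse(x_te, y_te), 3))
```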

Additionally, independent model validation teams provide a second opinion on the efficacy of the model, which helps reduce the risk of confirmation bias on the part of the model developers.  Further model refinement might be required.

This iterative design, development, and validation process reduces the first kind of risk by improving the likelihood that the final version will give decision makers solid information.

The second kind of model risk, misusing the outputs, can be addressed in the implementation phase. Risk managers learned the hard way in the financial crisis of 2007-2008 that it is vitally important for decision makers to understand—not just intellectually but viscerally—that mathematical modeling is an art and models are subject to limitations. The future may be unlike the past.  Understanding the limitations can help reduce the “unknown unknowns” and inhibit the misuse of model outputs.

Being aware of the potential for model risk is the first step. Acting to reduce it is the second. What hedges can you put in place to mitigate the risk?

First, design, develop, and test models in an open environment which welcomes objective opinions and rewards critical thinking.  Give yourself enough time to complete multiple cycles of the process to refine the model.

Second, describe each model’s inherent limitations, as well as the underlying assumptions and design choices, in plain language that makes sense to business executives and risk managers who may not be quantitatively or technologically sophisticated.

Finally, consider engaging an independent third party with the expertise to review your model documentation, audit your modeling process, and validate your models.

For information on how FRG can help you defend your firm against model risk, please click here.

[1] Federal Reserve and OCC, “Supervisory Guidance on Model Risk Management,” Attachment to SR Letter 11-07 (April 4, 2011), page 3. Emphasis added.

[2] See for example the Federal Reserve’s SR letters 15-8 and 12-17.

The Case for Outsourced Hosting

Middle office jobs are fascinating. In performance analysis, spotting dubious returns and tracing them back to questionable inputs requires insight that seems intuitive or innate but results in fact from a keen understanding of markets, asset classes, investment strategies, security characteristics, and portfolio dynamics. Risk management additionally calls for imagination in scenario forecasting, math and programming skills in model development, judgment in prioritizing and mitigating identified risks, and managerial ability in monitoring exposures that continually shift with market movements and the firm’s portfolio decisions. Few careers so completely engage such a wide range of talents.

Less rewarding is handling the voluminous information that feeds the performance measurement system and risk management models. Financial data management is challenging for small banks and investment managers, and it becomes more and more difficult as the business grows organically, adding new accounts, entering new markets, and implementing new strategies that often use derivatives. Not to mention the extreme data integration issues that stem from business combinations!

And data management hasn’t any upside: nobody in your chain of command notices when it’s going well, and everyone reacts when it fails.

Nonetheless, reliable data is vital for informative performance evaluation and effective risk management, especially at the enterprise level. It doesn’t matter how hard it is to collect, format, sort, and reconcile the data from custodians and market data services as well as your firm’s own systems (all too often including spreadsheets) in multiple departments. Without timely, accurate, properly classified information on all the firm’s long and short positions across asset classes, markets, portfolios, issuers, and counterparties, you can’t know where you stand. You can’t answer questions. You can’t do your job.

Adding up the direct, explicit costs of managing data internally is a straightforward exercise; the general ledger keeps track of license fees. The indirect, implicit costs are less transparent. For example, they include the portion of IT, accounting, and administrative salaries and benefits attributable to mapping data to the performance measurement system and the risk models, coding multiple interfaces, maintaining the stress testing environment, correcting security identifiers and input errors—all the time-consuming details that go into supporting the middle office. The indirect costs also include ongoing managerial attention and the potential economic impact of mistakes that are inevitable if your company does not have adequate staffing and well-documented, repeatable, auditable processes in place to support smooth performance measurement and risk management operations.

You can’t delegate responsibility for the integrity of the raw input data provided by your firm’s front office, portfolio assistants, traders, and security accountants. But you can outsource the processing of that data to a proven provider of hosting services. And then your analysts can focus on the things they do best—not managing data but evaluating investment results and enterprise risk.

Learn more about FRG’s Hosting Services here.

Spreadsheet Risk Is Career Risk

Stop and think: how much does your firm — and your work group — depend upon electronic spreadsheets to get mission-critical assignments done? How badly could a spreadsheet error damage your company’s reputation? Its financial results? Your own career?

Here’s an example. Advising Tibco Software on its sale to Vista Equity Partners, Goldman Sachs used a spreadsheet that overstated its client’s shares outstanding and, as a result, overvalued the company by $100 million. The Wall Street Journal reported, “It’s not clear who created the spreadsheet. Representatives for Tibco and Goldman declined to comment. Vista couldn’t be reached for comment.” (October 16, 2014.) Nonetheless, it’s safe to assume that the analyst who prepared the spreadsheet was identified, along with his or her manager, and that they both were penalized for the mistake.

Spreadsheets proliferate in financial organizations for good reasons. They offer convenient, flexible, and surprisingly powerful ad hoc solutions to all sorts of analytical problems. But as risk managers we are an impatient lot, and all too often results-oriented people like us turn to spreadsheets even for production applications because we cannot wait for IT resources to become available. We know that the IT department has a hard-and-fast policy of disavowing the business lines’ spreadsheets, but that’s all right, we tell ourselves, because “it’s only temporary.” Then we turn our attention to another problem….

Let’s take it as axiomatic that the firm’s risk management operations should not exacerbate the firm’s exposure to operational risk.

You may already have established some controls to mitigate spreadsheet risk in production applications. For example, key spreadsheets may be encrypted, stored on dedicated, non-networked PCs with password protection, and backed up every night. And it might be said that spreadsheets are self-documenting because the macros and formulas are visible and the functions are vendor-defined. As a practical matter, however, only the analyst who originally developed a spreadsheet fully understands it. When she leaves, and other analysts add enhancements — possibly with new names for existing variables — the spreadsheet becomes much more difficult to troubleshoot.

We recommend taking these steps now:

  • Starting in the risk management area, inventory all the spreadsheets in use across the firm’s operations.
  • Confirm that every time a spreadsheet enters a workflow it is identified as such. Cross-check the workflow documentation and swim lane diagrams against the spreadsheet inventory and update them where necessary.
  • Document every non-trivial spreadsheet, minimally including its purpose, the data sources, and any procedural tips.
  • Select the operationally embedded spreadsheets whose failure would be most injurious to departmental objectives and downstream processes, and look for permanent solutions with proper controls.

Whether or not it’s explicitly listed in your performance objectives, you owe it to your firm and yourself to migrate mission-critical spreadsheet applications to a reliable platform with codified controls. Systems development life cycle (SDLC) methodologies impose the discipline that’s needed in all phases of the project, from requirements analysis through deployment and maintenance, to minimize operational risk. This is not a trivial task; transferring the ad hoc functionality you currently have embedded in spreadsheets to a system that is well designed and amply supported takes commitment. But the potential consequences of inaction are unacceptable. We strongly encourage you to take the necessary steps before a problem comes to light because a key person leaves the organization, a client spots a costly mistake, or — in the worst case — an operational crisis prevents the firm from meeting its contractual or regulatory obligations. And you lose your job.

Click here for information about FRG’s state-of-the-art risk modeling services and here for information about our hosting services.      

Subscribe to our blog!