Numbers Don’t Lie

Some years back, I came upon an article in the San Francisco Chronicle comparing major airlines’ on-time performance (1).  The comparison was based on data the US Department of Transportation publishes monthly. The article listed the airlines with the best and poorest performance but offered no explanation of why that might be. In particular, it ranked Alaska Airlines at the bottom of the list.  As I read the article I realized that readers would draw the erroneous conclusion that Alaska Airlines was doing a very poor job.  Why did I think this conclusion was erroneous?  Because Alaska Airlines flies out of cities prone to extreme weather conditions, and those conditions naturally cause delays.

Confounding Variables

In statistics, confounding (i.e., lurking) variables are extraneous variables that correlate with both the dependent and independent variables.  Failing to recognize them usually leads to erroneous causal conclusions.  In the case of the on-time performance article, the piece never mentioned that Alaska Airlines flies out of foggy airports.  It is therefore quite possible that it is not the airline causing the delays but, rather, the airports the airline is flying out of.  I wrote a letter to the editor explaining this (2).
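
To make the lurking-variable point concrete, here is a minimal sketch with purely hypothetical on-time counts: the airline that performs better at every individual airport can still look worse in the aggregate, simply because it flies mostly out of the fog-prone airport.

```python
# A minimal sketch (hypothetical numbers) of how a lurking variable -- the
# departure airport -- can flip a comparison of on-time performance.

flights = [
    # (airline, airport, on_time_flights, total_flights)
    ("Airline A", "Foggy City", 150, 200),   # 75% on time
    ("Airline A", "Sunny City",  95, 100),   # 95% on time
    ("Airline B", "Foggy City",  70, 100),   # 70% on time
    ("Airline B", "Sunny City", 180, 200),   # 90% on time
]

def on_time_rate(rows):
    on_time = sum(r[2] for r in rows)
    total = sum(r[3] for r in rows)
    return on_time / total

for airline in ("Airline A", "Airline B"):
    rows = [r for r in flights if r[0] == airline]
    print(f"{airline} overall: {on_time_rate(rows):.1%}")
    for airport in ("Foggy City", "Sunny City"):
        sub = [r for r in rows if r[1] == airport]
        print(f"  {airport}: {on_time_rate(sub):.1%}")

# Airline A wins at both airports (75% vs 70%, 95% vs 90%), yet its overall
# rate (81.7%) trails Airline B's (83.3%) because most of its flights leave
# from the foggy airport.
```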

Bottom line: watch out for those lurking variables. People can lie with numbers but numbers don’t lie!

(1) The San Francisco Chronicle article can be found here: http://articles.sfgate.com/2003-02-04/business/17476492_1_on-time-performance-american-airlines-flights
(2) The letter I published can be found here: http://www.sfgate.com/cgi-bin/article.cgi?f=/Chronicle/a/2003/02/23/BU164666.DTL

The Heroes of the Risk Quantification Process

What makes for a successful risk quantification process?  Prior to joining the firm I thought it was all about analytics (my own specialty).   I’ve come to realize that a happy marriage between data, analytics, and reporting needs to take place.  Each component brings a necessary piece to the risk puzzle of a portfolio.

But it goes beyond just that.  After working in all three areas I realized that the talented people who specialized in a particular domain were, in a sense, heroes.

The Unsung Hero

These are the people working with the data.  They are also, I believe, the linchpin of the entire process.   The individuals who work in this area go through a great deal of effort (and frustration) to ensure the data being piped down the line is clean, coherent, relevant, and current.  This involves cool stuff like data models and fancy acronyms like ETL.

What shocks me the most?  Few people truly recognize the importance of these individuals, especially when the data is clean and correct.  If it is dirty and incorrect, you know it in a hurry.

The Superhero

These are the people using statistics/mathematics to assess risk.  Much like superheroes with their utility belts or powers, people in this group have their own special tools.  These individuals use a host of nifty items to get a sense of the risk in the portfolio.  Value-at-Risk, regression models, time series analysis, copulas and other intimidating sounding, but extremely useful, tools are employed.

The Epic Hero

These are the people who take the data and analytics and build reports.  In literature, an epic hero is a person favored by the gods.  In this case, the “gods” can be one’s boss or upper management.  Individuals who do this well get praise upon praise upon praise.  The reason why: done correctly, nothing tells a better story than a picture…with pretty colors and nicely formatted numbers.

In Summary

There are three core pieces required to establish a solid risk quantification process.  And, if you are lucky enough to be working with heroes, you can obtain all sorts of insight about the risk in your portfolio.

A parting suggestion: the next time you want to praise the person who created your awesome report, do so.  Then follow the thumping sound – that will be the data person banging his head against the nearest wall.  Make sure you thank him too.

Time Series Data Curve Balls

Albert Pujols, one of the best baseball players in the game today, did not start the 2011 baseball season well.  Prior to 2011 he had a lifetime batting average (BA) of .331 – that’s about one hit in every three at bats!  However, much to the dismay of Cardinals fans (of which I am one), he had a meager .245 BA for the first month of the 2011 season.

When I first saw this statistic I wept. Had the great Pujols lost his mojo?

The Pitch

It occurs to me now, as I look back at my teary-eyed reaction, that I had fallen victim to a classic time series data (i.e., data measured over time) blunder.  I failed to consider the time period.

I believe that when practitioners work with time series data they must always keep in mind when the series begins and ends.  This is critical: these boundaries directly influence what one is trying to measure.  Setting the wrong boundaries can result in biased estimators, which in turn can give you faulty models and a very poor performance evaluation at the end of the year.

The Swing

Consider the following table of BAs for Albert Pujols:


The table shows that where the time series begins and ends greatly impacts the statistic.  Depending on which time period is chosen, one could argue that Mr. Pujols is performing worse than, roughly in line with, or better than his lifetime BA.

Note: I included the best and worst 25 days to show that one can do some serious damage cherry-picking data.
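
To see how much the window matters, here is a minimal sketch using simulated (not actual) game logs for a .330-ish hitter; the same mechanics apply to any time series statistic.

```python
# A minimal sketch of how the choice of start and end dates changes a
# time series statistic such as batting average. Game logs are simulated.
import random

random.seed(42)
games = []
for _ in range(162):                      # one simulated regular season
    at_bats = random.randint(3, 5)
    hits = sum(random.random() < 0.33 for _ in range(at_bats))
    games.append((at_bats, hits))

def batting_average(window):
    at_bats = sum(ab for ab, _ in window)
    hits = sum(h for _, h in window)
    return hits / at_bats

print(f"Full season   : {batting_average(games):.3f}")
print(f"First 25 games: {batting_average(games[:25]):.3f}")

# Cherry-pick the hottest 25-game stretch to show how misleading it can be
best_start = max(range(len(games) - 24),
                 key=lambda i: batting_average(games[i:i + 25]))
print(f"Best 25 games : {batting_average(games[best_start:best_start + 25]):.3f}")
```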

Postgame Analysis

Sooooo……?  What data should be used? Some data?  All data?

From my experience, I have found that there is no universal and tidy answer to these questions.  I have learned that the correct time period is the one that best (legitimately) solves my problem.  If prudent thought goes into the decision then one, I believe, is on solid footing.

In the case of El Hombre’s BA?  The entire regular season would be the appropriate time period.  And his mojo?  He’s still got it – even during an “off” year.

 

I’d like to thank Sean at Sports Reference LLC (http://www.sports-reference.com/) for making my analysis so much easier.

The Basel Games

As the 2012 Summer Olympic Games descend upon London, England, national pride and attention grow around the world in anticipation of an elite few chasing international glory. Organizing such an international affair takes a great deal of leadership and planning. In 1894, Baron Pierre de Coubertin founded the International Olympic Committee. The IOC is the governing body of the Olympics and has since developed the Olympic Charter that defines its structure and actions. A similar comparison can be made with the development of the Basel Accords. Established in 1974, the Basel Committee on Banking Supervision (BCBS), composed of central bankers from around the world, has taken on a role similar to the IOC’s, but its objective “is to enhance understanding of key supervisory issues and improve the quality of banking supervision worldwide.”  The elite participants of “The Basel Games” are banks with an international presence.  The Basel Accords themselves can be considered the Olympic Charter; their purpose was to create a consistent set of minimum capital requirements so that banks can meet their obligations and absorb unexpected losses. Although the BCBS does not have the power to enforce the accords, many countries have adopted its recommendations on banking regulation into law. To date, three accords have been developed.

The Bronze Medal

Our second runner-up is Basel I, also known as the 1988 Basel Accord, which grew out of the fallout from the liquidation of Herstatt Bank in 1974. With the development of technology and risk management techniques, Basel I is considered obsolete by today’s standards, but it did create a foundation for regulating systemic risk. Its primary objective was to curb credit risk for international banks. The Cooke ratio was introduced at this time, setting the minimum required capital as a fixed percentage of assets weighted according to their nature: regulatory capital should be at least 8% of risk-weighted assets in order to handle unexpected losses, with assets classified into five groups based on their liquidity and the debtor’s credit rating. The Cooke ratio was advantageous in its simplicity, but Basel I had quite a few drawbacks: for example, its inability to incorporate market risk (and only crudely approximate operational risk) in the calculation of risk-weighted assets, and the fact that the ratio applies to every credit risk portfolio no matter how diverse it is. Basel I proved to be a great starting point for evaluating risk, but in June 2004 the Basel Committee introduced Basel II.
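
For concreteness, here is a minimal sketch of the Cooke-ratio arithmetic, using illustrative Basel I style risk-weight buckets and hypothetical balances.

```python
# A minimal sketch of a Basel I style calculation: risk-weight each asset
# class, sum to risk-weighted assets (RWA), and require capital >= 8% of RWA.
# Weights and exposures below are illustrative, not a complete rule set.

risk_weights = {
    "cash": 0.0,
    "oecd_bank_claim": 0.2,
    "residential_mortgage": 0.5,
    "corporate_loan": 1.0,
}

exposures = {                      # hypothetical balances, in millions
    "cash": 50,
    "oecd_bank_claim": 200,
    "residential_mortgage": 300,
    "corporate_loan": 450,
}

rwa = sum(amount * risk_weights[asset] for asset, amount in exposures.items())
required_capital = 0.08 * rwa      # the Cooke-ratio floor

print(f"RWA: {rwa:.0f}m, minimum regulatory capital: {required_capital:.1f}m")
```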

The Silver Medal

Pillar I

The New Basel Accord, or Basel II, began implementation in 2007 and takes home second place. Like the ancient pillars of Greece, the New Basel Accord is built on three pillars. The first pillar’s objective is the calculation of the required regulatory capital of 8% of a bank’s total risk-weighted assets. What is important to note is that the Basel Committee incorporated market risk, using value at risk, and operational risk, set equal to a fixed percentage. Each major risk factor (credit, market, and operational) has a standardized approach and one or more advanced approaches. The standardized approaches use risk weights provided by the Basel Committee, and their appeal lies in their simplicity.

 

Advanced approaches allow international banks to calculate their risk-weighted assets internally, in an effort to be as precise as possible about the minimum capital charge they hold. There are two advanced approaches for credit risk: Foundation Internal Ratings-Based (IRB) and Advanced IRB.  Four elements are needed to calculate credit risk: probability of default (PD), exposure at default (EAD), loss given default (LGD), and effective maturity. Under Foundation IRB the bank calculates only PD, and the values for the other three are provided by the supervisor; under Advanced IRB the bank has the freedom to calculate all four. Market risk has only one advanced approach, the Internal Models Approach, in which value at risk is calculated using a 10-day horizon, a 99% confidence interval, and one year of data; stress testing is then performed to determine the stability of the assets. The final risk, operational risk, has two advanced approaches: the Advanced Measurement Approach and the Internal Measurement Approach, in which internal and external data and scenario analysis are used. Like credit risk, the second advanced approach to operational risk may be used only at the discretion of regulators.
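
For orientation, here is a minimal sketch of how three of the IRB inputs might be combined, using a hypothetical portfolio. It shows only the expected loss identity (EL = PD x LGD x EAD); the actual IRB capital formula prescribed by the Basel Committee is considerably more involved.

```python
# A minimal sketch of combining IRB inputs into expected loss for a
# hypothetical loan portfolio. Illustrative only -- not the IRB capital formula.

loans = [
    # (probability of default, loss given default, exposure at default in $m)
    (0.01, 0.45, 100.0),
    (0.03, 0.40,  50.0),
    (0.10, 0.60,  20.0),
]

expected_loss = sum(pd_ * lgd * ead for pd_, lgd, ead in loans)
print(f"Portfolio expected loss: {expected_loss:.2f}m")
```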

Pillar II

The second pillar covers the supervisory review process. The goal here is to ensure that banks effectively maintain the capital required for the amount of risk they take on. The Basel Committee set out four key principles for supervisory review. The first is to evaluate the bank’s process for assessing its risk and capital charge, its management oversight and internal controls, and its reporting. The second is to assess the bank’s strategy for maintaining adequate capital levels. The third concerns the supervisor’s response, giving supervisors the ability to hold banks to capital above the required minimum. Finally, supervisors have the ability to intervene to prevent banks from falling below the minimum levels.

Pillar III

Pillar III complements Pillars I and II by ensuring market discipline. This is achieved through a set of disclosure requirements: for example, banks have to disclose their capital structure, their strategies and processes, and the three areas of risk they are assessing. This creates transparency into the banks, and because banks report their exposures, investors can apply market discipline when deciding where to invest.

 

The Gold Medal

The winner resulting from the deliberations of the Basel Committee is Basel III. Essentially, Basel III is a set of amendments to Basel II in response to the 2008 global financial crisis. The objective is to increase capital requirements and to incorporate bank leverage and liquidity into the assessment of risk. Although the 8% capital requirement has not changed, the structure of the capital tiers has. Tier 1 capital, which is more liquid than Tier 2, must constitute 6% of the capital requirement, and the remaining 2% can be Tier 2 capital; Tier 3 capital is removed completely. (Basel II required only that Tier 2 capital not exceed the amount of Tier 1.) Capital buffers are also proposed in Basel III: a 2.5% capital conservation buffer on top of the minimum charge and up to another 2.5% countercyclical buffer. It has also been proposed that “systemically important” banks be subject to even higher capital requirements. A leverage ratio, a liquidity coverage ratio, and a net stable funding ratio will also be introduced in an effort to scrutinize a bank’s ability to meet its financial obligations. Many other changes have been proposed; for a list of all of them please refer to: Basel 3: higher capital requirements, liquidity rules, and transitional arrangements.
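
As a rough illustration of the tier structure and conservation buffer described above, here is a minimal sketch with hypothetical figures. For simplicity the buffer is applied to the total ratio and the countercyclical buffer is omitted; the actual Basel III definitions are more granular.

```python
# A minimal, simplified check of the Basel III capital structure described
# above: Tier 1 >= 6% of RWA, total capital >= 8%, plus a 2.5% conservation
# buffer (applied to the total ratio here for simplicity).

def check_capital(tier1, tier2, rwa):
    ratios = {
        "tier1": tier1 / rwa,
        "total": (tier1 + tier2) / rwa,
    }
    ok = (ratios["tier1"] >= 0.06
          and ratios["total"] >= 0.08
          and ratios["total"] >= 0.08 + 0.025)   # with conservation buffer
    return ratios, ok

# Hypothetical bank, figures in millions
ratios, ok = check_capital(tier1=70, tier2=25, rwa=1000)
print(ratios, "meets requirements" if ok else "falls short of the buffer")
```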

The Basel Accords, atop the three-tiered rostrum in Basel, Switzerland, stand proud, for now, as the answer to global systemic risk. With an ever-changing global economy it would be premature to announce the closing ceremonies, so onward the games continue.  The bronze medalist, Basel I, deserves recognition for laying the foundation for analyzing and regulating global risk. Where Basel I fell short, the New Accord took the lead and won the silver medal with its incorporation of market and operational risk. Finally, reigning above all is Basel III, which builds further upon Basel II and takes the gold medal by restructuring the capital charge and incorporating bank leverage and liquidity into risk assessment.

 

Catching Up With Volatility

The last few years have seen significant volatility in the financial markets.  This has highlighted a basic issue with the popular simulation models used in financial institutions: models have a hard time catching up with volatility.

In other words, these models react slowly to changing volatility conditions, causing the risk metrics to be out of sync with the actual world, and potentially failing to predict significant losses with both business and regulatory repercussions.

It is not that the simulation methodologies are necessarily wrong.  Instead, it’s that they have limitations and work best under “normal” conditions.

Take, for example, historical simulation, which most banks use to calculate VaR for regulatory purposes.  In this approach a considerable amount of historical data is included to support a high confidence level for the predicted values. Historical simulation, however, rests on the assumption that there is nothing new under the sun – an assumption that has been proven wrong repeatedly since 2008.  Financial markets have – unfortunately – set new records in volatility and free fall over the past three years, at least in the context of the historical window typically used in such calculations.
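
As a point of reference, here is a minimal sketch of historical simulation VaR using simulated returns and a setup in the spirit of the regulatory one (roughly one year of daily data, 99% confidence). The lag problem is visible in the mechanics: a volatility spike only affects the number once the spike days enter the window.

```python
# A minimal sketch of historical simulation VaR: re-apply each historical
# one-day return in the window to today's portfolio value and read off the
# loss quantile. Returns are simulated for illustration.
import random

random.seed(0)
portfolio_value = 1_000_000
window = [random.gauss(0.0, 0.01) for _ in range(250)]   # ~1 year of daily returns

losses = sorted(-r * portfolio_value for r in window)
var_99 = losses[int(0.99 * len(losses))]                 # 99% one-day VaR

# Note: a volatility spike today changes this estimate only gradually, as
# spike days enter (and calm days leave) the 250-day window -- hence the lag.
print(f"1-day 99% VaR: {var_99:,.0f}")
```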

Another popular simulation methodology is covariance-based Monte Carlo simulation, where the covariance matrix is estimated from recent historical data. This is, again, limited by the events captured in the historical window and can also dampen the effects of extreme events. The covariance matrix can furthermore suffer from a Simpson’s-paradox-like effect: if correlations between risk factors reverse during tumultuous times, a covariance matrix estimated over a window that mixes those times with calmer periods can show little or no correlation.
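
Here is a minimal sketch, using simulated data and illustrative parameters, of how pooling two regimes with opposite correlations can wash the estimated correlation out to nearly nothing, even though each regime on its own is strongly correlated.

```python
# A minimal sketch of correlation reversal washing out in a pooled estimate.
# Two risk factors are +0.6 correlated in the calm regime and -0.6 correlated
# (with higher volatility) in the stressed regime; the pooled correlation for
# these illustrative parameters comes out close to zero.
import random

random.seed(1)

def correlated_pair(rho, sigma, n):
    pairs = []
    for _ in range(n):
        x = random.gauss(0, sigma)
        e = random.gauss(0, sigma)
        y = rho * x + (1 - rho ** 2) ** 0.5 * e
        pairs.append((x, y))
    return pairs

def corr(pairs):
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

calm = correlated_pair(rho=0.6, sigma=0.01, n=200)      # calm regime
stress = correlated_pair(rho=-0.6, sigma=0.02, n=50)    # stressed regime
print(f"calm: {corr(calm):+.2f}, stress: {corr(stress):+.2f}, "
      f"pooled: {corr(calm + stress):+.2f}")
```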

But there is help to be had:

If the issue is that the historical window is not including enough representative events, then the historical changes can be augmented with specific events or hypothetical scenarios. This might, however, require a more free-style method of simulation.

If issues arise because too much “irrelevant” history is included in the data used, thus drowning out the important events, then a shorter or more selective set of data can be used.

Choosing a shorter window can cause the confidence level of the results to decrease. However, if possible, switching to covariance-based Monte Carlo simulation can alleviate this effect and will not require more data.

If extreme events are either dominating or drowning in the covariance matrix, a solution might be to have multiple covariance matrices at hand and choose among them based on signal values in the data. This can also remedy issues with correlation reversal. Again, this should not require any new data.

A more costly, but also more accurate, method is to formulate statistical models for the risk factors. This allows for explicit modeling of volatility and how fast it should be incorporated in risk metrics.
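
As one example of such a model, here is a minimal sketch of an EWMA (RiskMetrics-style) variance update, where the decay factor lambda is precisely the dial that controls how fast new volatility is incorporated. Data and parameters are illustrative.

```python
# A minimal sketch of explicit volatility modeling via an EWMA variance update:
# variance_t = lambda * variance_{t-1} + (1 - lambda) * return_t^2.
# A smaller lambda means faster reaction to new volatility.
import random

random.seed(2)
lam = 0.94                     # illustrative decay factor
variance = 0.01 ** 2           # starting daily variance estimate

returns = ([random.gauss(0, 0.01) for _ in range(100)]   # calm period
           + [random.gauss(0, 0.03) for _ in range(20)]) # volatility spike

for r in returns:
    variance = lam * variance + (1 - lam) * r ** 2

# After only 20 spike days the estimate has already moved most of the way
# from ~1% toward ~3% daily volatility.
print(f"EWMA daily vol estimate after the spike: {variance ** 0.5:.2%}")
```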

Finally, choosing the methodology that is most appropriate for each risk factor is obviously an optimal approach if the resources are available.

We’re Not Perfect

Imagine the Masters has finished in a tie: Tiger Woods and Phil Mickelson are heading to a playoff. All Phil needs to win is a putt. You rush over to the next hole to find the perfect spot in case he misses. Then, suddenly, you hear the roar of thousands of people. Phil has just earned another green jacket! "What was I thinking?!" you mumble to yourself as you head home.

What explains people's inevitable deviations from rational thought? According to Peter Bernstein, the author of Against the Gods, these deviations can be explained by decision regret, the endowment effect, and myopia. In fact, all are applicable, in some form, to even the most rational investors.

David Bell explains that decision regret is "the result of focusing on the assets you might have had if you had made the right decision." From an investor's standpoint, decision regret comes, for example, from selling a stock and watching it skyrocket soon after. This, in turn, promotes the irrational behavior of selling low and buying high.

The endowment effect is another human flaw that leads to irrational behavior. Richard Thaler defines this phenomenon as "our tendency to set a higher selling price on what we own than what we would pay for the identical item if we did not own it." This is irrational because the investor's price to sell differs from his price to buy; whether he owns the item should not matter.

The final human flaw is myopia: not being able to see far enough into the future to make rational decisions. This concept is of particular importance to investors in volatile stock markets, mainly because stocks do not have a maturity date. A volatile stock market is an environment that, Bernstein states, is "nothing more than bets on the future, which is full of surprises."

Ever since Daniel Bernoulli's thoughts on utility and risk aversion in the 18th century, behavioral economics has been studying irrational behavior. It is understood that not all investors follow the same rational model; if they did, everyone's investment portfolio would look exactly the same. There have to be winners and losers in investing. However, if irrational thought can be deterred, then one just might catch the game-winning putt.

Coming Down the Assembly Line: Automated Form PF Reporting

Regulate Your Risk…GET A CAR!
Commuters who use public transportation face the risk of being late on a daily basis. There are so many unaccounted-for factors that the probability of arriving on time is not in their favor. What can they do? How can they eliminate the chance of, say, a subway breakdown or a late bus?

An answer: they can buy a car.

A parallel can be drawn to the 2008 financial crisis. In response to the crisis, the Dodd-Frank Act established the Financial Stability Oversight Council (FSOC). The council’s mission is to monitor and respond to systemic risks affecting financial markets in the United States. How can the FSOC hedge the risk of being too late to a financial crisis that would crush the economy?

An answer: Form PF.

The process of automating Form PF reporting is a daunting task that will soon come to fruition thanks to the collaborative efforts of The Financial Risk Group and ConceptONE. It starts with the building blocks of data validation and ends with a final product ready for analysis by the FSOC. Within this progression, some data serves as a direct input to the report, while other data is used to calculate output values that feed the report. An added feature is tracking the data used to answer the questions on Form PF, which is especially important for auditing purposes. The process has already begun rolling down the assembly line: while most private fund advisers will need to begin filing for their fiscal year or fiscal quarter ending on or after December 15, 2012, some funds will need to begin as soon as June 15, 2012.

Those that qualify for the June 15, 2012 date are advisers with at least $5 billion in assets under management (AUM) attributable to hedge funds or private equity funds, and liquidity fund advisers with at least $5 billion in AUM attributable to liquidity funds and registered money market funds.

Much like car manufacturers that build specific models to cater to customers’ needs, sections of Form PF are designated for qualifying advisers. All SEC-registered advisers with at least $150 million in private fund AUM must file. Those with less than $1.5 billion in hedge fund AUM, $1 billion in liquidity fund and registered money market fund AUM, or $2 billion in private equity fund AUM are considered small. These advisers must file only once a year, within 120 days of the end of their fiscal year, and the information they provide is significantly less than what “large” private fund advisers must report. Large hedge fund advisers must file within 60 days of the end of each fiscal quarter, large liquidity fund advisers within 15 days, and large private equity fund advisers within 120 days.
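
To make the thresholds and deadlines concrete, here is a minimal sketch that classifies an adviser using the AUM figures above. It is illustrative only and omits the nuances of the actual Form PF instructions.

```python
# A minimal sketch of the filing rules described above, using the AUM
# thresholds from the text (figures in USD billions). Illustrative only.

def form_pf_category(hedge_aum, liquidity_aum, pe_aum):
    """Return (adviser category, filing frequency/deadline)."""
    if hedge_aum + liquidity_aum + pe_aum < 0.150:
        return "not required to file", None
    if hedge_aum >= 1.5:
        return "large hedge fund adviser", "within 60 days of each fiscal quarter end"
    if liquidity_aum >= 1.0:
        return "large liquidity fund adviser", "within 15 days of each fiscal quarter end"
    if pe_aum >= 2.0:
        return "large private equity fund adviser", "within 120 days of fiscal period end"
    return "small private fund adviser", "annually, within 120 days of fiscal year end"

print(form_pf_category(hedge_aum=2.0, liquidity_aum=0.0, pe_aum=0.0))
print(form_pf_category(hedge_aum=0.3, liquidity_aum=0.0, pe_aum=0.5))
```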

Safety First
The key to safety is a structurally sound foundation. For a car, that is a solid chassis; for the automated Form PF design, it is the staging tables. Much like the chassis of a car, properly thought-out staging tables provide the framework for the rest of the process. These staging tables show us where to put data so the process can be as streamlined as possible. Luckily for us, the structure of these tables was provided by ConceptONE.

To illustrate the value of staging tables, assume that a staging table missing a column is like a chassis missing the proper door mounts. In the case of the car, assembly would stop there; the chassis would be flagged as an exception and the proper course of action taken to fix the problem. Staging tables provide a similar function: if the next step in the process needs to pull data from a staging table column that doesn’t exist, the code creates an exception report and stops execution.
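
A minimal sketch of that structural check, with hypothetical table and column names:

```python
# A minimal sketch of the structural check described above: confirm that a
# staging table has the columns later steps expect, and stop with an
# exception if any are missing. Names are hypothetical.

REQUIRED_COLUMNS = {"fund_id", "as_of_date", "asset_class", "market_value"}

def validate_structure(table_name, actual_columns):
    missing = REQUIRED_COLUMNS - set(actual_columns)
    if missing:
        # In the real process this would be written to an exception report
        raise RuntimeError(f"{table_name}: missing columns {sorted(missing)}")
    return True

try:
    validate_structure("stg_positions", ["fund_id", "as_of_date", "market_value"])
except RuntimeError as exc:
    print(exc)   # stg_positions: missing columns ['asset_class']
```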

“Take me home, country roads”
Cars don’t always have the luxury of smooth roads, just as we in the financial risk industry don’t always have the luxury of valid data. Until that glorious day when we do, something needs to be built to smooth out those “potholes” in the data. Just as employees on the assembly line bolt on the suspension, the Form PF process bolts on validation.

Form PF’s validation is relatively complex: it requires validation of multiple inputs that may vary depending on the desired report. The backbone of the Form PF validation is metadata (provided by ConceptONE). This information defines the required data and provides a list of valid inputs. Using this information allows the process to accurately validate data and create exception reports when “potholes” of data are encountered. Suspension keeps a car running smoothly on bumpy roads; validation keeps the process running smoothly when invalid data is encountered.
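
A minimal sketch of metadata-driven validation, with hypothetical field names and valid values (the actual ConceptONE metadata is, of course, richer than this):

```python
# A minimal sketch of metadata-driven validation: the metadata names each
# required field and its valid inputs, and non-conforming rows produce
# exception entries. Field names and valid values are hypothetical.

metadata = {
    "fund_type": {"required": True,
                  "valid_values": {"hedge", "liquidity", "private_equity"}},
    "currency":  {"required": True,
                  "valid_values": {"USD", "EUR", "GBP"}},
}

def validate_row(row):
    exceptions = []
    for field, rules in metadata.items():
        value = row.get(field)
        if rules["required"] and value is None:
            exceptions.append(f"{field}: missing")
        elif value is not None and value not in rules["valid_values"]:
            exceptions.append(f"{field}: invalid value {value!r}")
    return exceptions

print(validate_row({"fund_type": "hedge", "currency": "CAD"}))
# -> ["currency: invalid value 'CAD'"]
```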

Vroom… Vroom…
The critical element in a car is the engine; for the Form PF process, it is the calculations. Just as an engine cannot run without gas, calculations cannot run without data. Based on specific questions, data is pulled from the staging tables into working tables where the calculations are performed. Like the computer chip in a car that regulates fuel consumption (to comply with regulations), a collection of calculation rules serves the same purpose for Form PF: the rules further subset the data to specify exactly which variables should be calculated, and how, for optimal Form PF reporting.
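
A minimal sketch of rule-driven calculations, with hypothetical rules and fields:

```python
# A minimal sketch of rule-driven calculations: each rule names the subset of
# working-table rows it applies to and the aggregation to perform.
# Rule names, filters, and fields are hypothetical.

working_table = [
    {"fund_id": "F1", "asset_class": "equity", "market_value": 120.0},
    {"fund_id": "F1", "asset_class": "rates",  "market_value":  80.0},
    {"fund_id": "F2", "asset_class": "equity", "market_value": 200.0},
]

rules = [
    {"name": "gross_asset_value",
     "filter": lambda r: True,
     "calc": lambda rows: sum(r["market_value"] for r in rows)},
    {"name": "equity_exposure",
     "filter": lambda r: r["asset_class"] == "equity",
     "calc": lambda rows: sum(r["market_value"] for r in rows)},
]

results = {rule["name"]: rule["calc"]([r for r in working_table if rule["filter"](r)])
           for rule in rules}
print(results)   # {'gross_asset_value': 400.0, 'equity_exposure': 320.0}
```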

That New Car Smell
A car on the assembly line would probably be hard for the average Joe to recognize until its body panels are put on. The same can be said of the Form PF automation: if one were to look only at the code and calculations, one would have a tough time guessing what the final product will look like. That’s where a nice, shiny output report comes into play. Different sections of the Form PF report have to be filled out based on the type and size of the private fund adviser, just as different body panels are used for different models of a car.

While body panels are generally large and basic, the real details are in the car’s interior, and the output reports follow the same idea. Direct input data (the seats of the car, say) is pulled from a specific field multiple times, was never run through calculations, and will probably never change; this can be information such as identification numbers or addresses. What about options like a sleek CD player or a fancy navigation system? Like these options, the output reports show information from specific calculations based on the desired result. There are plenty of ways to get a result, but only one way to get the result that you need. And let’s be honest, you want that fancy navigation system.

“But why do they put the guarantee on the box?”
Source tracking plays an important role in the automation of Form PF reporting. In fact, source tracking is why customers keep coming back to the dealership (The Financial Risk Group / ConceptONE). It’s the “warranty” that ensures the accuracy of the data loaded into the staging tables through the standardized data loader. Source tracking provides multiple reports that display the input tables while also highlighting the columns associated with particular questions on Form PF.

The necessity of source tracking is especially evident when auditing data. Much like unknown malfunctions in a car, unknown errors can arise in data while it is updated, stored, and used to complete Form PF. Source tracking provides access to the input data, helping avoid fines and other potential penalties. Internal auditing is also made easier through source tracking. The “warranty” provides bumper-to-bumper coverage for as long as Form PF reports are parked in the Investment Adviser Registration Depository garage.
