Top 6 Things To Consider When Creating a Data Services Checklist

“Data! Data! Data! I can’t make bricks without clay.”
— Sherlock Holmes, in Arthur Conan Doyle’s The Adventure of the Copper Beeches

You should by now have a solid understanding of the growth and history of data, the challenges of data and how to manage them effectively, what data as a service (DaaS) is, how to optimize data using both internal and external data sources, and the benefits of using DaaS. In our final post of the series, we will discuss the top six things to consider when creating a Data Services strategy.

Let’s break this down into two sections: 1) prerequisites and 2) the checklist.

Prerequisites

We’ve identified five crucial points below to consider prior to starting your data services strategy. These will help frame and pull together the sections of information needed to build a comprehensive strategy to move your business towards success.


1: View data as a strategic business asset

In the age of data regulation, including the BCBS 239 principles for effective risk data aggregation and risk reporting, GDPR and others, data, especially data relating to an individual, is considered an asset that must be managed and protected. It can also be aggregated, purchased, traded and legally shared to create bespoke user experiences and support more targeted business decisions. Data must be classified and managed with the appropriate level of governance, in the same vein as other assets such as people, processes and technology. Adopting this mindset, appreciating the value of data and recognizing that not all data is alike and must be managed accordingly will ultimately ensure business success in a data-driven world.

2: Ensure executive buy-in, senior sponsorship and support

As with any project, executive buy-in is required to ensure top-down adoption. Partnering with business-line executives who create data and are power users of it can also help champion its proper management and reuse across the organization. This assists in achieving goals and ensuring project and business success. The numbers don’t lie: business decisions should be driven by data.

3: Have a defined data strategy and target state that supports the business strategy

Having data for the sake of it won’t provide any value; rather, a clearly defined data strategy and target state that outlines how data will support the business will allow for increased user buy-in and support. This strategy must include and outline:

  • A governance model
  • An organization chart with ownership, roles and responsibilities, and operations
  • Goals for data accessibility and operations (or data maturity goals)

If these sections are not agreed upon from the start, uncertainty, overlapping responsibilities, duplication of data and effort, and regulatory or even legal issues may arise.

4: Have a Reference Data Architecture to Demonstrate where Data Services Fit

It is critical to understand the architecture that supports data and data maturity goals, including the components required to manage data from acquisition through distribution and retirement. It is also important to understand how these components fit into the firm’s overall technology architecture and infrastructure. Defining a clear data architecture and its components, including:

  • Data model(s)
  • Acquisition
  • Access
  • Distribution
  • Storage
  • Taxonomy

is required prior to integrating the data.

5: Data Operating Model – Understanding How Data Traverses the Organization

It is crucial to understand the data operations and operating model – including who does what to the data and how data ownership changes over time or transfers among owners. Data lineage – where your data came from, its intended use, who has or is allowed to have access to it, and where it goes inside or outside the organization – is key to keeping the data clean and optimizing its use. Data quality tracking, metrics and remediation will also be required.

Existing recognized standards, such as the Legal Entity Identifier (LEI), that can be acquired and distributed via data services can help in sharing and reusing data that is ‘core’ to the firm. They can also assist in tying together data sets used across the firm.
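To make that last point concrete, here is a minimal sketch (in Python, using pandas) of joining two hypothetical data sets on the LEI. The identifiers, names, and figures are placeholders invented for the example, not real records.

```python
import pandas as pd

# Placeholder identifiers in LEI-like form; they are NOT real LEIs.
counterparties = pd.DataFrame({
    "lei": ["EXAMPLE0000000000001", "EXAMPLE0000000000002"],
    "legal_name": ["Example Holdings plc", "Sample Finance AG"],
})
exposures = pd.DataFrame({
    "lei": ["EXAMPLE0000000000001", "EXAMPLE0000000000002"],
    "exposure_usd": [12_500_000, 3_200_000],
})

# Because both data sets carry the same externally governed identifier, they can be
# joined directly, with no fuzzy name-matching needed to build a firmwide view.
firmwide_view = counterparties.merge(exposures, on="lei")
```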

Checklist/Things to Consider

Once you’ve finished the requirements gathering and understand the data landscape, including the roles and responsibilities described above, you’re ready to begin putting together your data services strategy. To build an all-encompassing strategy, the experts suggest including the following.

1: Define the Data Services Required

  •  Classification: core vs. business shared data and ownership
    • Is everyone speaking a common language?
    • What data is ‘core’ to the business, meaning it will need to be commonly defined and used across the organization?
    • What data will be used by a specific business that may not need to be uniformly defined?
    • What business-specific data will be shared across the organization, which may need to be uniformly defined and might need more governance?
  • Internal vs external sourcing
    • Has the business collected or created the data themselves or has it been purchased from a 3rd party? Are definitions, metadata and business rules defined?
    • Has data been gathered or sourced appropriately and with the correct uniform definitions, rules, metadata and classification, such as LEI?
  • Authoritative Data Sources for the Data Services
    • Have you documented where, from whom, and when the data was gathered (from Sources of Record or Sources of Origin)? For example, the Source of Origin might be a trading system, an accounting system or a payments system, while the general ledger might be the Source of Record for positions.
    • Who is the definitive source (internal/external)? Which system?
  • Data governance requirements
    • Have you adhered to the proper definitions, rules, and standards set in order to handle data?
    • Who should be allowed to access the data?
    • Which applications (critical, usually externally facing) must access the data directly?
  • Data operations and maintenance
    • Have you kept your data clean and up to date?
    • Are you up to speed with regulations, such as GDPR, and have you successfully obtained explicit consent for the information?
    • Following the organization chart, rules and requirements detailed above, are the data owners known, informed and aware that they are responsible for making sure their data maintains its integrity?
    • Are data quality metrics monitored with a process to correct data issues?
    • Do all users with access to the data know who to speak to if there is a data quality issue and know how to fix it?
  • Data access, distribution and quality control requirements
    • Has the data been classified properly? Is it public information? If not, is it restricted to those who need it?
    • Have you defined how you share data between internal/external parties?
    • Have the appropriate rules and standards been applied to keep it clean?
    • Is there a clearly defined process for this?
  • Data integration requirements
    • If the data will be merged with other data sets/software, have the data quality requirements been met to ensure validity?
    • Have you prioritized the adoption of which applications must access the authoritative data distributed via data services directly?
    • Have you made adoption easy – allowing flexible forms of access to the same data (e.g., via spreadsheets, file transfers, direct APIs, etc.)?

2: Build or Acquire Data Services

To recap, are you building or acquiring your own data services? Keep in mind that the following must be in place and must comply with applicable regulations:

  • Data sourcing and classification, with assigned ownership
  • Data access and integration
  • Proper data services implementation, with access to authoritative data
  • Proper data testing and remediation, keeping the data clean
  • Appropriate access control and distribution of the data, with flexible access
  • Quality control monitoring
  • A data issue resolution process

The uses of and regulations around data will constantly evolve, as will the number of users data can support in business ventures. We hope that this checklist provides a foundation for building and supporting your organization’s data strategies. If there are any areas you’re unclear on, don’t forget to take a look back through our first five blogs, which provide more in-depth overviews of the use of data services to support the business.

Thank you for tuning into our first blog series on data management. We hope that you found it informative and, most importantly, useful in pursuing your business goals.

If you enjoyed our blog series or have questions on the topics discussed, write to us on Twitter @FRGRisk.

Dessa Glasser is a Principal with the Financial Risk Group, and an independent board member of Oppenheimer & Company, who assists Virtual Clarity, Ltd. on data solutions as an Associate. 

 

RELATED:

Data Is Big, Did You Know?

Data Management – The Challenges

Data as a Service (DaaS) Solution – Described

Data as a Service (DaaS) Data Sources – Internal or External?

Data as a Service (DaaS) – The Benefits

Is Your Business Getting The Full Bang for Its CECL Buck?

Accounting and regulatory changes often require resources and efforts above and beyond “business as usual”, especially those like CECL that are significant departures from previous methods. The efforts needed can be as complex as those for a completely new technology implementation and can take precedence over projects that are designed to improve your core business … and stakeholder value.

But with foresight and proper planning, you can prepare for a change like CECL by leveraging resources in a way that will maximize your efforts to meet these new requirements while also enhancing business value. At Financial Risk Group, we take this approach with each of our clients. The key is to start by asking “how can I use this new requirement to generate revenue and maximize business performance?”

 

The Biggest Bang Theory

In the case of CECL, there are two significant areas that will create the biggest institution-wide impact: analytics and data governance. While the importance of these is hardly new to financial institutions, we are finding that many neglect to leverage their CECL data and analytics efforts to create that additional value. Some basic first steps you can take include the following.

  • Ensure that the data utilized is accurate and that its access and maintenance align to the needs and policies of your business. In the case of CECL these will be employed to create scenarios, model, and forecast … elements that the business can leverage to address sales, finance, and operational challenges.
  • For CECL, analytics and data are leveraged in a much more comprehensive fashion than previous methods of credit assessment provided.  Objectively assess the current state of these areas to understand how the efforts being put toward CECL implementation can be leveraged to enhance your current business environment.
  • Identify existing available resources. While some firms will need to spend significant effort creating new processes and resources to address CECL, others will use this as an opportunity to retire and re-invent current workflows and platforms.

Recognizing the business value of analytics and data may be intuitive, but what is often less intuitive is knowing which resources earmarked for CECL can be leveraged to realize that broader business value. The techniques and approaches we have put forward provide good perspective on the assessment and augmentation of processes and controls, but how can these changes be quantified? Institutions without in-house experienced resources are well advised to consider an external partner. The ability to leverage expertise of staff experienced in the newest approaches and methodologies will allow your internal team to focus on its core responsibilities.

Our experience with this type of work has provided some very specific results that illustrate the short-term and longer-term value realized. The example below shows the magnitude of change and benefits experienced by one of our clients: a mid-sized North American bank. A thorough assessment of its unique environment led to a redesign of processes and risk controls. The significant changes implemented resulted in less complexity, more consistency, and increased automation. Additionally, value was created for business units beyond the risk department. While different environments will yield different results, those illustrated through the methodologies set forth here provide a good example to better judge the outcome of a process and controls assessment.

 

| | Legacy Environment | Automated Environment |
| --- | --- | --- |
| Reporting Output | No daily manual controls available for risk reporting | Daily in-cycle reporting controls are automated with minimum manual interaction |
| Process Speed | Credit run 40+ hours; manually input variables prone to mistakes | Credit run 4 hours; cycle time for variable creation reduced from 3 days to 1 |
| Controls & Audit | Multiple audit issues and regulatory MRAs | Audit issues resolved and MRAs closed |
| Model Execution | Spreadsheet driven | 90 models automated, resulting in 1,000 manual spreadsheets eliminated |

 

While one approach will not fit all firms, providing clients with an experienced perspective on more fully utilizing their specific investment in CECL allows them to make decisions for the business that might otherwise never be considered, thereby optimizing the investment in CECL and truly ensuring you receive the full value from your CECL buck.

More information on how you can prepare for—and drive additional value through—your CECL preparation is available on our website and includes:

White Paper – CECL: Why the expectations are different

White Paper – CECL Scenarios: Considerations, Development and Opportunities

Blog – Data Management: The Challenges

Data as a Service (DaaS) Solution – Described

Data as a Service (DaaS) can be used to provide a single source of authoritative (or golden) data for use in a firm’s critical applications. Here, a logical layer (often held in memory for quick access) serves up data that has been verified, defined, and described with metadata from source systems. This provides data that is readily understood and has a unique and unambiguous meaning within the context in which it is known and used.

Source systems can be tapped in real time to ensure that all changes are accurately and immediately represented in the data service. These source systems can be internal or external to the firm, depending on the needs of the receiving party.

The authoritative data can then be served up to multiple users at the same time, delivered in a format that they prefer (e.g., file transfer, online access, download into other systems or spreadsheets), giving them quicker access to information in a format that they can readily use.
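As a rough illustration of this pattern, the sketch below shows a hypothetical in-memory layer serving the same golden records as either JSON (for applications) or CSV (for spreadsheet users). The data model, field names, and values are assumptions for the sketch, not part of any particular implementation.

```python
import csv
import io
import json

# Golden client records held in an in-memory store, refreshed from source systems.
AUTHORITATIVE_CLIENTS = {
    "C001": {"client_id": "C001", "legal_name": "Example Holdings plc", "country": "GB"},
    "C002": {"client_id": "C002", "legal_name": "Sample Finance AG", "country": "DE"},
}

def get_clients(fmt: str = "json") -> str:
    """Serve the same authoritative records as JSON or CSV, so every consumer
    sees identical data regardless of the delivery format they prefer."""
    records = list(AUTHORITATIVE_CLIENTS.values())
    if fmt == "json":
        return json.dumps(records)
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=["client_id", "legal_name", "country"])
    writer.writeheader()
    writer.writerows(records)
    return buffer.getvalue()
```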

By cleaning the data, describing it and distributing it from a central logical location to users and applications, data quality checks can be performed and efficiencies gained. Given that ‘all eyes’ are on the same data, any data quality issues are quickly identified and resolved.

DaaS offers the flexibility to provide access to both internal and external data in an easily consumable form. Access to a multitude of authoritative data in a consistent format can be extremely useful in timely delivery of new applications or reporting, including regulatory reports, and will be quicker than waiting for a single physical source for this data to be built.

This is particularly useful when data are needed by multiple parties and when data is ‘siloed’ in an organization. How many versions are there? How many platforms? Don’t forget, data generation has vast potential.

The more complex your data needs, the more likely that a DaaS solution will benefit you.

Dessa Glasser is a Principal with the Financial Risk Group, and an independent board member of Oppenheimer & Company, who assists Virtual Clarity, Ltd. on data solutions as an Associate. 

Questions? Comments? Talk to us on Twitter @FRGRisk

Related:

Data Management – The Challenges

Does your company suffer from the challenges of data silos? Dessa Glasser, Principal with the Financial Risk Group, who assists Virtual Clarity on data solutions as an Associate, discusses the challenges of data management in the second post of our blog series.

In our previous blog, we talked about the need for companies to get a handle on their data management. This is tough, but necessary. As companies develop – as they merge and grow and as more data becomes available to them in multiple forms – data silos occur, making it difficult for a ‘single truth’ of the data to emerge. Systems and data are available to all, but behaviors often differ among teams, including the ‘context’ in which the data is used. Groups have gathered and enhanced their own data to support their business, making it difficult to reconcile and converge to a single source for business-critical data.

This complication is magnified because:

  • New technology brings in large amounts of data – both structured and unstructured
  • Each source has its own glossary of terms, definitions, metadata, and business rules
  • Unstructured data often needs tagging to structured data to assist firms in analytics
  • Structured and unstructured data require metadata to interpret the data and its context

As Dessa Glasser notes, “The problem is not getting the data, the problem is processing, interpreting and understanding the data.”

Companies can also be hindered by the ‘do it yourself’ mentality of their teams, whereby individuals who want systems implemented immediately will often construct a process and the data themselves rather than waiting for IT to deliver it, which either takes time or may not be available on a timely basis.

These cross-over efforts undermine a firm’s ability to use the data effectively and often lead to:

  • Data sources being available in multiple forms – both internal and external
  • The costly and manual reconciliation of incorrect data and difficulty aggregating data
  • The inability to generate business insights from the data – more time is spent processing and reconciling the data than analyzing it

Meanwhile, clients are demanding a holistic view of the services they’re buying into. Management and regulators, when they ask for data, want to see the full relationship with clients across the firm and a holistic view of all aggregated risk positions, which is hard to pull together from numerous teams that work with, and may interpret, the data differently. Companies must present a cohesive front, regardless of each team’s procedures or the context in which it uses the data.

All of the above are prime examples of why the governance and management of data is essential. The end goal is one central, logical, authoritative source for all critical data for a company. It is important to treat data as a business asset and ensure the timely delivery of both well-defined data and metadata to the firm’s applications and business users. This can be done by developing a typical data warehouse to serve up the data, which often can take years to build. However, this can also be facilitated more quickly by leveraging advances in technologies, such as the cloud, data access and management tools, and designing a Data as a Service (DaaS) solution within a firm.

So, how to go about it?

Tune in next month for blog 3, where we’ll discuss just that.

Dessa Glasser is a Principal with the Financial Risk Group, and an independent board member of Oppenheimer & Company, who assists Virtual Clarity, Ltd. on data solutions as an Associate. 

Questions? Comments? Talk to us on Twitter @FRGRisk

Related:

Current Expected Credit Loss (CECL) a New Paradigm for Captives, Too

Discussion of the ramifications of CECL for financial institutions has in large part focused on banks, but as we addressed in a recent paper, “Current Expected Credit Loss: Why the Expectations Are Different,” this new accounting treatment extends to a much larger universe. An example is the captives that finance Americans’ love affair with cars; their portfolios of leases and loans have become much larger and the implications of CECL more significant.

As with other institutions, data, platforms, and modeling make up the challenges that captives will have to address. But unlike other types of institutions, captives have more concentrated portfolios, which may aid in “pooling” exercises but may be inadvertently affected by scenario modeling. A basic tenet for all institutions is the life-of-loan estimate and the use of reasonable and supportable forecasts. While some institutions may have had “challenger” models that moved in this direction, captives have not tended to utilize this type of approach in the past.

The growth of captives’ portfolios and their correlation to a number of macroeconomic factors (e.g., interest rates, commodity prices, tariffs) call for data and scenarios that require a different level of modeling and forecasting. Because FASB does not provide template methodologies or calculations, it will be necessary to develop these scenarios with the “reasonable and supportable” requirement in mind. While different approaches will likely be adopted, those that utilize transaction-level data can provide a higher level of accuracy over time, supporting the goals laid out in the new guidelines. As might be imagined, the value of experience in developing and deploying these types of models can’t be overemphasized.

We have found that having the ability to manage the following functional components of the platform are critical to building a flexible platform that can manage the changing needs of the users:

  • Scenario Management
  • Input Data Mapping and Registry
  • Configuration Management
  • Model Management

Experience has taught that there are significant considerations in implementing CECL, but there are also some improvements that can be realized for institutions that develop a well-structured plan. Captives are advised to use this as an opportunity to realize efficiencies, primarily in technology and existing models. Considerations around data, platforms, and the models themselves should leverage available resources to ensure that investments made to address this change provide as much benefit as possible, both now and into the future.

Forecasting Capital Calls and Distributions

Early in his career, one of us was responsible for cash flow forecasting and liquidity management at a large multiline insurance company. We gathered extensive historical data on daily concentration bank deposits, withdrawals, and balances and developed an elementary but fairly effective model. Because insurance companies receive premium payments from and pay claims to many thousands of individuals and small companies, we found we could base reasonably accurate forecasts on the quarter of the year, month of the quarter, week of the month, and day of the week, taking holidays into account. This rough-and-ready approach enabled the money market traders to minimize overnight balances, make investment decisions early in the morning, and substantially extend the average maturity of their portfolios. It was an object lesson in the value of proactive cash management.
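For readers who like to see the mechanics, here is a toy Python version of that calendar-driven approach. The column names are assumptions, and the holiday adjustments mentioned above are omitted to keep the sketch short.

```python
import pandas as pd

def calendar_profile(history: pd.DataFrame) -> pd.Series:
    """history has columns ['date', 'net_flow'] with one row per business day.
    Returns the average net cash flow keyed by (month of quarter, week of month,
    day of week), i.e. the calendar profile used for the forecast."""
    d = pd.to_datetime(history["date"])
    month_of_quarter = ((d.dt.month - 1) % 3 + 1).rename("month_of_quarter")
    week_of_month = ((d.dt.day - 1) // 7 + 1).rename("week_of_month")
    day_of_week = d.dt.day_name().rename("day_of_week")
    return history["net_flow"].groupby([month_of_quarter, week_of_month, day_of_week]).mean()

# To forecast a future date, look up its calendar bucket in the returned profile.
```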

It is not such a trivial matter for investors in private capital funds to forecast the timing and amount of capital calls and distributions. Yet maintaining adequate liquidity to meet obligations as they arise means accepting either a market risk or an opportunity cost that might be avoided. The market risk comes from holding domestic large-cap stocks that will have to be sold quickly, whatever the prevailing price, when a capital commitment is unexpectedly drawn down; the opportunity cost comes from adopting a defensive posture and holding cash or cash equivalents in excess of the amount needed for ongoing operations, especially when short-term interest rates are very low.

FRG is undertaking a financial modeling project aimed at forecasting capital calls and distributions. Our overall objective is to help investors with outstanding commitments escape the unattractive alternatives of holding excess cash or scrambling to liquidate assets to meet contractual obligations whose timing and amount are uncertain. To that end, we seek to assist in quantifying the risks associated with allocation weights and to understand the probability of future commitments so as to keep the total portfolio invested in line with those weights.

In other words, we want to make proactive cash management possible for private fund investors.

As a first step, we have formulated some questions.

  1. How do we model the timing and amount of capital calls and disbursements? Are there exogenous variables with predictive power?
  2. How does the timing of capital calls and disbursements correlate across funds of different vintages and underlying types (e.g., private equity from venture capital to leveraged buyouts, private credit, and real estate, among others)?
  3. Do private funds’ capital calls and distributions correlate with public companies’ capital issuance and dividend payout decisions?
  4. How do we model the growth of invested capital? What best explains the returns achieved before money is returned to LPs?
  5. What triggers distributions? 
  6. How do we allocate money to private funds keeping an eye on total invested capital vs. asset allocation weights?
    1. The timing of capital calls and distributions is probabilistic (from #1); a minimal simulation sketch of this treatment follows this list.
    2. Diversification among funds can produce a smooth invested-capital profile. But we need to know how these funds co-move to create distributions around that profile (from #2).
    3. A confounding problem is the growth of invested capital (from #4). This growth affects total portfolio value and the asset allocation weights. If total exposure is constrained, what is the probability of breaching those constraints?
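As a starting point for the probabilistic treatment in question 1, the sketch below simulates quarterly capital calls against an unfunded commitment. The call probability and sizing rule are purely illustrative assumptions, not findings from the FRG project.

```python
import numpy as np

def simulate_calls(commitment=100.0, quarters=40, call_prob=0.30, call_frac=0.15, seed=0):
    """Simulate quarterly capital calls: each quarter a call occurs with probability
    call_prob and draws call_frac of the remaining unfunded commitment."""
    rng = np.random.default_rng(seed)
    unfunded, calls = commitment, []
    for _ in range(quarters):
        call = call_frac * unfunded if rng.random() < call_prob else 0.0
        unfunded -= call
        calls.append(call)
    return np.array(calls)

# Monte Carlo over many paths; the paths can then be summarized into percentile
# bands for the unfunded commitment and the cash needed in any given quarter.
paths = np.stack([simulate_calls(seed=s) for s in range(1000)])
```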

We invite front-line investors in limited partnerships and similar vehicles to join the discussion. We would welcome and appreciate your input on the conceptual questions. Please contact Dominic Pazzula at info@frgrisk.com if you have an interest in this topic.

Turning a Blind Eye to the Risky Business of Incentive-based Sales Practices 

Should you be monitoring your sales activities to detect anomalous behaviors?

The use of sales incentives (commissions, bonuses, etc.) to motivate the behavior of salespeople has a long history in the United States. We would all like to assume that incentive-based pay is not structured with the intent of harming or abusing customers, but a number of recent, well-publicized stories of mistreatment of both customers and customer information show that these negative consequences do exist. Likely, the business practice of turning an administrative blind eye to the damage done to consumers by these sales incentive programs has played an even greater role in the scale of abuse uncovered over the last decade. In the most recent cases of unchecked, large-scale customer abuse, with particular attention focused on the financial services industry, harms arising from this business paradigm of tying employee benefits (defined as broadly tying employment and/or income potential to sales) were resolved through arbitration and frequently typecast as “a cost of doing business”.

Today, are you putting your business, and all those associated with its success, at risk by turning a blind eye to the effects of your business practices, including your employee incentive programs? New consequences are being laid on corporate leaders and board members for all business practices used by the company, and the defense of not knowing the intricacies and results of these practices does not protect you from these risks.

We have developed a methodology to detect both customer sales and individual product behaviors that are indicative of problematic situations requiring additional examination. Our methodology goes beyond the aggregate sales figures primarily discussed in the literature to highlight individuals and/or groups that are often obscured when analyzing such data.
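The sketch below is not that methodology itself, but a simplified illustration of the underlying idea: compare each salesperson’s behavior to the peer group rather than relying on aggregates alone. The column names, metrics, and z-score threshold are assumptions for illustration.

```python
import pandas as pd

def flag_anomalous_sellers(activity: pd.DataFrame, z_threshold: float = 3.0) -> pd.DataFrame:
    """activity has one row per salesperson per month with (assumed) columns
    'seller_id', 'accounts_opened', 'accounts_closed_within_90d', 'complaints'."""
    metrics = ["accounts_opened", "accounts_closed_within_90d", "complaints"]
    per_seller = activity.groupby("seller_id")[metrics].mean()
    z_scores = (per_seller - per_seller.mean()) / per_seller.std(ddof=0)
    flagged = z_scores.abs().gt(z_threshold).any(axis=1)
    return per_seller[flagged]  # individuals whose behavior departs sharply from peers
```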

A forthcoming paper, “Sales Practices: Monitoring Sales Activity for Anomalous Behaviors,” will explore these issues, and a resolution, in depth. Visit any of our social media channels for the link.

 

 

 

IFRS 9: Modeling Challenges

Calculating expected credit losses under IFRS 9 is easy. It requires little more than high school algebra to determine the aggregate present value of expected future cash shortfalls. But it is not easy to ascertain the key components used in the basic equation, regardless of whether the approach taken is “advanced” (i.e., where PD, LGD, and EAD are modeled) or “simplified” (also called “intermediate”). The forward-looking stance mandated by IFRS 9 makes the inherently difficult process of specifying these variables all the more complex.
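As a reminder of how simple the arithmetic itself is, here is a worked example of the basic calculation, discounting PD x LGD x EAD over a three-year horizon. All figures are illustrative assumptions, not values prescribed by IFRS 9.

```python
# Illustrative lifetime ECL: discounted sum of marginal PD_t * LGD_t * EAD_t.
marginal_pd = [0.020, 0.015, 0.010]      # probability of default in each year
lgd = [0.45, 0.45, 0.45]                 # loss given default
ead = [1_000_000, 800_000, 600_000]      # exposure at default per year
discount_rate = 0.05                     # effective interest rate (assumed)

ecl = sum(
    pd_t * lgd_t * ead_t / (1 + discount_rate) ** (t + 1)
    for t, (pd_t, lgd_t, ead_t) in enumerate(zip(marginal_pd, lgd, ead))
)
print(round(ecl, 2))  # lifetime ECL under these assumptions
```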

For the sake of brevity, let’s consider only the advanced approach for this discussion. There are two immediate impacts on PD model estimation: the point-in-time requirements and the length of the forecast horizon.

PD estimates need to reflect point-in-time (PIT) rather than through-the-cycle (TTC) values. What this means is that PDs are expected to represent the current period’s economic conditions instead of some average through an economic cycle. Bank risk managers will have to decide whether they can adapt a CCAR (or other regulatory) model to this purpose, determine a way to convert a TTC PD to a PIT PD, or build an entirely new model.
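One commonly used way to convert a TTC PD to a PIT PD, shown here only as a hedged sketch rather than a recommended method, is a one-factor (Vasicek-style) transform conditioned on a credit-cycle index. The asset correlation value is an assumption.

```python
import numpy as np
from scipy.stats import norm

def ttc_to_pit(ttc_pd: float, z: float, rho: float = 0.12) -> float:
    """One-factor transform of a through-the-cycle PD into a point-in-time PD.
    z is a standardized credit-cycle index (negative in a downturn);
    rho is an assumed asset correlation."""
    return float(norm.cdf((norm.ppf(ttc_pd) - np.sqrt(rho) * z) / np.sqrt(1 - rho)))

ttc_to_pit(0.02, z=-1.0)  # downturn conditions push the PIT PD above the 2% TTC level
```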

The length of the forecast horizon has two repercussions. First, one must consider how many models to build for estimating PDs throughout the forecast. For example, it may be determined that a portfolio warrants one model for year 1, a second model for years 2 to 3, and a third model for years 3+. Second, one should consider how far into the forecast horizon to use models at all. Given the impact of model risk, along with the onus of maintaining multiple models, PDs for a horizon greater than seven years might be better estimated by drawing a value from some percentile of an empirical distribution.
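A minimal sketch of that segmentation is shown below. The three “models” are stand-in callables so the example runs, the third is treated as covering years 4 and beyond, and the seven-year cutoff and percentile are assumptions.

```python
import numpy as np

# Stand-ins for three fitted PD models (year 1, years 2-3, later years).
model_year_1 = lambda macro: 0.010 * (1 + macro)
model_years_2_3 = lambda macro: 0.015 * (1 + macro)
model_later_years = lambda macro: 0.020 * (1 + macro)

# Assumed empirical distribution of long-run observed PDs.
historical_pds = np.random.default_rng(0).beta(2, 60, size=500)

def pd_for_year(year: int, macro: float, cutoff: int = 7, percentile: float = 75) -> float:
    """Route each forecast year to the appropriate model; beyond the cutoff,
    fall back to a percentile of the empirical distribution."""
    if year > cutoff:
        return float(np.percentile(historical_pds, percentile))
    if year == 1:
        return model_year_1(macro)
    if year <= 3:
        return model_years_2_3(macro)
    return model_later_years(macro)

term_structure = [pd_for_year(y, macro=0.10) for y in range(1, 11)]
```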

Comparatively speaking, bank risk managers may find it somewhat less difficult to estimate LGDs, especially if collateral values are routinely updated and historical recovery rates for comparable assets are readily available in the internal accounting systems. That said, IFRS 9 requires an accounting LGD, so models will need to be developed to accommodate this, or a process will have to be defined to convert an economic LGD into an accounting one.

Projecting EADs is similarly challenging. Loan amortization schedules generally provide a valid starting point, but unfortunately they are only useful for installment loans. How does one treat a revolving exposure? Can one leverage, and tweak, the same rules used for CCAR? In addition, embedded options have to be taken into account. There’s no avoiding it: estimating EADs calls for advanced financial modeling.
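For a revolving exposure, one common convention, sketched below with an assumed 50% credit conversion factor, is to add a fraction of the undrawn limit to the drawn balance. IFRS 9 does not prescribe the factor, and embedded options would still need separate treatment.

```python
def revolving_ead(drawn: float, limit: float, ccf: float = 0.50) -> float:
    """Estimate exposure at default for a revolving facility as the drawn balance
    plus a credit conversion factor (CCF) applied to the undrawn portion."""
    undrawn = max(limit - drawn, 0.0)
    return drawn + ccf * undrawn

revolving_ead(drawn=40_000, limit=100_000)  # 40,000 + 0.5 * 60,000 = 70,000
```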

As mentioned above, there are differences between the requirements of IFRS 9 and those of other regulatory requirements (e.g., CCAR). As a result, the models that banks use for stress testing or other regulatory functions cannot be used as-is for IFRS 9 reporting. Bank risk managers will have to decide, then, whether their CCAR models can be adapted with relatively minor modifications. In many cases they may conclude that it makes more sense to develop new models. Then all the protocols and practices of sound model design and implementation come into play.

Of course, it is also important to explain the conceptual basis and present the supporting evidence for PD, LGD, and EAD estimates to senior management—and to have the documentation on hand in case independent auditors or regulatory authorities ask to see it.

In short, given PD, LGD, and EAD, it’s a trivial matter to calculate expected credit losses. But preparing to comply with the IFRS 9 standard is serious business. It’s time to marshal your resources.

Managing Model Risk

The Federal Reserve and the OCC define model risk as “the potential for adverse consequences from decisions based on incorrect or misused model outputs and reports.”[1]  Statistical models are the core of stress testing and credit analysis, but banks are increasingly using them in strategic planning. And the more banks integrate model outputs into their decision making, the greater their exposure to model risk.

Regulators have singled out model risk for supervisory attention;[2] managers who have primary responsibility for their bank’s model development and implementation processes should be no less vigilant. This article summarizes the principles and procedures we follow to mitigate model risk on behalf of our clients.

The first source of model risk is basing decisions on incorrect output.  Sound judgment in the design stage and procedural discipline in the development phase are the best defenses against this eventuality. The key steps in designing a model to meet a given business need are determining the approach, settling on the model structure, and articulating the assumptions.

  • Selecting the approach means choosing the optimal level of granularity (for example, should the model be built at the loan or segment level).
  • Deciding on the structure means identifying the most suitable quantitative techniques (for example, should a decision tree, multinomial logistic, or deep learning model be used).
  • Stating the assumptions means describing both those that are related to the model structure (for instance, distribution of error terms) and those pertaining to the methodology (such as default expectations and the persistence of historical relationships over the forecast horizon).

Once the model is defined, the developers can progressively refine the model, critically subjecting it to rounds of robust testing both in and out of sample. They will make further adjustments until the model reliably produces plausible results.
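A bare-bones example of that in-sample/out-of-sample discipline might look like the following. The column names, split date, and choice of a logistic regression are assumptions for illustration, not a prescribed workflow.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def fit_and_backtest(data: pd.DataFrame, feature_cols: list,
                     target_col: str = "default_flag", split_date: str = "2016-01-01"):
    """Fit on the earlier period, then check discrimination on an out-of-time holdout."""
    train = data[data["obs_date"] < split_date]
    test = data[data["obs_date"] >= split_date]
    model = LogisticRegression(max_iter=1000).fit(train[feature_cols], train[target_col])
    auc_in = roc_auc_score(train[target_col], model.predict_proba(train[feature_cols])[:, 1])
    auc_out = roc_auc_score(test[target_col], model.predict_proba(test[feature_cols])[:, 1])
    return model, auc_in, auc_out  # a large in/out gap is a signal to keep refining
```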

Additionally, independent model validation teams provide a second opinion on the efficacy of the model.  Further model refinement might be required.  This helps to reduce the risk of confirmation bias on the part of the model developer.

This iterative design, development, and validation process reduces the first kind of risk by improving the likelihood that the final version will give decision makers solid information.

The second kind of model risk, misusing the outputs, can be addressed in the implementation phase. Risk managers learned the hard way in the financial crisis of 2007-2008 that it is vitally important for decision makers to understand—not just intellectually but viscerally—that mathematical modeling is an art and models are subject to limitations. The future may be unlike the past.  Understanding the limitations can help reduce the “unknown unknowns” and inhibit the misuse of model outputs.

Being aware of the potential for model risk is the first step. Acting to reduce it is the second. What hedges can you put in place to mitigate the risk?

First, design, develop, and test models in an open environment which welcomes objective opinions and rewards critical thinking.  Give yourself enough time to complete multiple cycles of the process to refine the model.

Second, describe each model’s inherent limitations, as well as the underlying assumptions and design choices, in plain language that makes sense to business executives and risk managers who may not be quantitatively or technologically sophisticated.

Finally, consider engaging an independent third party with the expertise to review your model documentation, audit your modeling process, and validate your models.

For information on how FRG can help you defend your firm against model risk, please click here.

[1] Federal Reserve and OCC, “Supervisory Guidance on Model Risk Management,” Attachment to SR Letter 11-07 (April 4, 2011), page 3. Emphasis added.

[2] See for example the Federal Reserve’s SR letters 15-8 and 12-17.

The Case for Outsourced Hosting

Middle office jobs are fascinating. In performance analysis, spotting dubious returns and tracing them back to questionable inputs requires insight that seems intuitive or innate but results in fact from a keen understanding of markets, asset classes, investment strategies, security characteristics, and portfolio dynamics. Risk management additionally calls for imagination in scenario forecasting, math and programming skills in model development, judgment in prioritizing and mitigating identified risks, and managerial ability in monitoring exposures that continually shift with market movements and the firm’s portfolio decisions. Few careers so completely engage such a wide range of talents.

Less rewarding is handling the voluminous information that feeds the performance measurement system and risk management models. Financial data management is challenging for small banks and investment managers, and it becomes more and more difficult as the business grows organically, adding new accounts, entering new markets, and implementing new strategies that often use derivatives. Not to mention the extreme data integration issues that stem from business combinations!

And data management hasn’t any upside: nobody in your chain of command notices when it’s going well, and everyone reacts when it fails.

Nonetheless, reliable data is vital for informative performance evaluation and effective risk management, especially at the enterprise level. It doesn’t matter how hard it is to collect, format, sort, and reconcile the data from custodians and market data services as well as your firm’s own systems (all too often including spreadsheets) in multiple departments. Without timely, accurate, properly classified information on all the firm’s long and short positions across asset classes, markets, portfolios, issuers, and counterparties, you can’t know where you stand. You can’t answer questions. You can’t do your job.

Adding up the direct, explicit costs of managing data internally is a straightforward exercise; the general ledger keeps track of license fees. The indirect, implicit costs are less transparent. For example, they include the portion of IT, accounting, and administrative salaries and benefits attributable to mapping data to the performance measurement system and the risk models, coding multiple interfaces, maintaining the stress testing environment, correcting security identifiers and input errors—all the time-consuming details that go into supporting the middle office. The indirect costs also include ongoing managerial attention and the potential economic impact of mistakes that are inevitable if your company does not have adequate staffing and well-documented, repeatable, auditable processes in place to support smooth performance measurement and risk management operations.

You can’t delegate responsibility for the integrity of the raw input data provided by your firm’s front office, portfolio assistants, traders, and security accountants. But you can outsource the processing of that data to a proven provider of hosting services. And then your analysts can focus on the things they do best—not managing data but evaluating investment results and enterprise risk.

Learn more about FRG’s Hosting Services here.

Subscribe to our blog!