IFRS 17: Bridging the Gap

Existing accounting practices in the insurance industry are inconsistent and unclear. This led to the issuance of IFRS 17 Insurance Contracts – the first international Standard for insurance contracts. Because the Standard is principles-based and does not provide explicit guidance, insurers will need to develop their own best practices for IFRS 17 financial reporting. This blog post discusses some of the gaps insurers will face when implementing the new Standard.

Knowledge Gap

The biggest hurdle in implementing IFRS 17 is understanding the Standard itself. This knowledge gap affects project planning and execution, as IFRS 17 involves business decisions specific to an insurer’s products and reporting procedures. For example, is the insurer currently able to generate IFRS 17 future cash flows, or does it require an actuarial system to project them? Is the insurer’s general ledger (GL) system capable of processing new IFRS 17 accounting events, or does the existing system need to be extended? Insurers should be familiar enough with the Standard to recognize which tools work best for their business and to choose a solution that fits their needs.

Data Gap

Because IFRS 17 stresses the transparency of cash flows, insurers need to ensure that they have all the data required to measure their insurance contract liabilities. For instance, IFRS 17 measurements involve historical, current, and future cash flows – but are all these cash flows readily available for calculations? In addition, insurers will need to separate their contracts into insurance and non-insurance components, such as embedded derivatives and distinct investment components, since they fall under the scope of another Standard (IFRS 9). It is crucial to determine how much data transformation will be necessary for the implementation of IFRS 17, and how to address information that is not yet accessible.

Systems Gap

Once the data gaps are identified, insurers will need to recognize how these requirements will impact their current systems. For example, will new applications be necessary to support the implementation of the Standard? Will insurers be required to expand their source systems considering the level of granularity and volume of data needed for IFRS 17? What is the best way to introduce IFRS 17 GL accounts to the insurers’ existing accounting system? How should insurers validate their implementation results? These are some factors to consider when planning to meet the IFRS 17 requirements.

Process Gap

If new systems need to be established, then insurers will likely also need to re-configure their financial reporting process. Because IFRS 17 requires information from multiple business departments (e.g., actuarial, IT, and finance), insurers need to recognize the level of commitment required to implement the Standard from an organizational perspective. They must define the roles and responsibilities expected of each department to drive any process change. These departments must communicate with each other and collaborate appropriately to maintain an efficient workflow. One approach insurers can take is to develop and implement a standard operating procedure for the entire IFRS 17 reporting process, which would communicate details of the process in a central guiding document.

Conclusion

Insurers need to examine their existing gaps in knowledge, data, systems, and processes to implement the IFRS 17 Standard. With plenty of components to review, insurers should begin gap assessments as early as possible and reach out to industry experts to accelerate the process.

 

Carmen Loh is a Risk Consultant with FRG. She graduated with her Actuarial Science degree in 2016 from Heriot-Watt University before joining FRG in the following fall. She is currently the subject matter expert on an IFRS 17 implementation project for a general insurance company in the APAC region.

 

RELATED:

The 5 Ws and H of IFRS 17 (Part 1)

The 5 Ws and H of IFRS 17 (Part 2)

Data Governance in FIs: Root Cause Analysis

This series focuses on Data Governance in Financial Institutions. Our first post introduced the fundamentals of Data Governance. This discussion centers on how to find root causes of problems in organizations and recommend actions to solve them.

When analyzing deep issues and causes, it is important to take a comprehensive and holistic approach. Root Cause Analysis (RCA) is a systematic problem-solving approach intended to identify the root causes of problems or events. It is based on the principle that problems are most effectively solved by correcting or eliminating the primary causes, rather than only addressing the symptoms. Properly done, RCA can help a Financial Institution implement an effective Data Governance program by investigating and addressing data quality issues. It can also help the FI design appropriate data governance policies and data standards.

What is a Root Cause?

A root cause is an initiating event or condition in a cause-and-effect chain. It must be subject to change; that is, there is a definable factor that can be adjusted to create a positive outcome or to prevent a negative one.

A root cause must also meet four criteria:

  1. It is an underlying event that initiates a sequence of subsequent events
  2. It is logically and economically practical to identify
  3. It can be affected by management actions
  4. It is a practical basis to formulate and recommend corrective actions

The Process of RCA

There is no prescriptive process for RCA, but there are five steps that can help guide organizations:

[Figure: The five steps of the RCA process – describe the problem, gather the data, model causal changes, identify root causes, and recommend actions]

These steps are best completed via an iterative approach rather than a sequential one, to encourage regular participant feedback and continuous improvement based on that feedback.

Describe the Problem

When describing a problem, start with a factual statement of what is happening and why it is a problem. The following questions can also help when describing a problem:

  • When did the problem first occur?
  • Is it continuous or occasional?
  • Has the frequency of occurrence increased or decreased over time?
  • Who are the stakeholders and what processes are involved?

Gather the Data

Once the problem is defined, you can begin gathering the data. This usually entails collecting and reviewing examples of problem instances. You can also use these techniques to seek possible causes:

  • A review with subject matter experts (SMEs): does this candidate cause make sense?
  • A brainstorming session with SMEs and stakeholders: what do you think the problem could be?
  • Change analysis: what changed when the problem started?
  • Identification of archetypes: are there common patterns of behavior in systems?
  • Compare and contrast: when does the problem happen and when does it not?

Model Causal Changes

A causal model is a conceptual model that describes the causal mechanisms of a system. Various techniques can be used to model causal changes; this post focuses on three of the most common and widely used: Five Whys, Fishbone Diagramming, and Causal Loops.

Five Whys

Five Whys is a good tool for identifying the single most prominent cause.

How to complete the Five Whys:

  1. Write down the specific problem to help formalize it and describe it completely.
  2. Ask “why” the problem happens and write the answer down below the problem.
  3. If the answer does not identify the root cause of the problem written down in Step 1, ask “why” again and write that answer down.
  4. Repeat Step 3 until the team agrees that the problem’s root cause has been identified. This may take fewer or more than five iterations (a minimal sketch of the loop follows).
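For teams that want to capture the exercise programmatically, here is a minimal Python sketch of the loop above. The function name and interactive flow are illustrative assumptions, not part of any formal RCA tooling.

```python
# Minimal sketch of the Five Whys loop: keep asking "why" and recording
# answers until the team agrees the root cause has been reached.
def five_whys(problem: str) -> list[str]:
    chain = [problem]
    while True:
        answer = input(f'Why does "{chain[-1]}" happen?\n> ')
        chain.append(answer)
        if input("Is this the root cause? (y/n) > ").strip().lower() == "y":
            return chain  # may take fewer or more than five iterations

# Hypothetical session: five_whys("Monthly risk report was published late")
```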

Five Whys Example

[Figure: Five Whys diagram example]

Fishbone Diagramming

Fishbone Diagramming is effective for causal hierarchies and linear cause-and-effect chains with multiple contributing causes.

How to complete the Fishbone Diagram:

  1. Identify the problem statement and write it at the mouth of the fish.
  2. Identify the major categories of causes of the problem and write each as a branch off the main arrow. Examples include equipment or supply factors, environmental factors, rules/policy/procedure factors, and people/staff factors.
  3. Ask “why” a major category cause happens and write the answer as a branch from the appropriate category.
  4. Repeat for the other categories, asking “why” about each cause.
  5. Write sub-causes branching off the cause branches.
  6. Ask “why” to generate deeper levels of causes, and continue organizing them under related causes or categories until the root cause is identified (a minimal sketch follows this list).
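Because a fishbone diagram is simply a hierarchy of categories, causes, and sub-causes, it can be represented as a nested structure. The sketch below is a hypothetical Python illustration; the problem statement and every cause in it are invented for the example.

```python
# A fishbone diagram as nested data: problem -> categories -> causes -> sub-causes.
# All entries below are hypothetical.
fishbone = {
    "problem": "Customer records fail validation at month-end",
    "categories": {
        "People/Staff": {"Manual entry errors": ["No double-key verification"]},
        "Rules/Policy/Procedure": {"No naming standard": ["Standards never ratified"]},
        "Equipment/Supply": {"Legacy CRM truncates fields": ["System past end-of-life"]},
        "Environment": {"Month-end load spikes": ["Batch jobs overlap"]},
    },
}

def deepest_causes(diagram: dict):
    """Yield the deepest entry on each branch, i.e., the candidate root causes."""
    for causes in diagram["categories"].values():
        for cause, sub_causes in causes.items():
            yield from sub_causes or [cause]

print(list(deepest_causes(fishbone)))
```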

Fishbone Diagram Example

[Figure: Fishbone diagram example]

 

Causal Loops

Causal Loops work well for complex situations that involve circles of influence.

How to complete Causal Loops:

  1. Identify the nouns or variables that are important to the issue.
  2. Fill in the “verbs” by linking the variables together and determining how one variable affects another. Generally, if two variables move in the same direction (a positive relationship), the link is labeled with an “s”; if they move in opposite directions (a negative relationship), the link is labeled with an “o”.
  3. Determine whether the links produce a reinforcing or balancing causal loop and label it accordingly. To determine the type of loop, count the number of “o”s: an even number of “o”s, or none at all, makes a reinforcing loop, while an odd number makes a balancing loop (the sketch after this list applies this rule).
  4. Walk through the loops and “tell the story” to be sure the loops capture the behavior being described.
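The counting rule in step 3 is mechanical enough to express in a few lines. Here is a minimal Python sketch; the function name and example link labels are hypothetical.

```python
# Classify a causal loop from its link labels: an even number of "o" links
# (including zero) produces a reinforcing loop; an odd number, a balancing loop.
def loop_type(links: list[str]) -> str:
    return "balancing" if links.count("o") % 2 == 1 else "reinforcing"

print(loop_type(["s", "s", "s"]))  # no "o"s -> reinforcing
print(loop_type(["s", "o", "s"]))  # one "o" -> balancing
```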

Causal Loop Example

[Figure: Causal loop example]

 

These causal modeling techniques can be used alone or in combination with one another to gather as much information about the problem as possible.

Identify Root Causes

When identifying the root cause(s), it is a good idea to ask questions like “Where can I remove or correct the issue?” or “Where can I minimize the effect?” It is worth noting that good analysis is actionable analysis; if there is not enough information to answer these questions, it may be wise to circle back to the previous steps in the process.

Recommend Actions

Finally, when recommending actions, we want to eliminate interference and errors, improve processes, and consider the side effects of actions. It is beneficial to plan ahead and predict the effects of your solution so you can spot potential failures before they happen. It is also important to learn from the underlying issues behind the root cause so that you can apply those lessons to systematically prevent future issues. RCA may require multiple corrective actions, but if a root cause is identified correctly, it is unlikely that the problem will recur.

Conclusion

RCA is an essential way to perform a comprehensive, system-wide review of significant problems and the factors that led to them. By following the RCA process above, you will be able to describe a problem, gather data, model causal changes, identify root causes, and ultimately recommend actions that lead to a long-term solution.

 

RELATED:

Data Governance in FIs: Intro to Data Governance

REFERENCES:

Newcomb, Carol. “A Data Governance Primer, Part 1: Finding the Root Cause.” The Data Roundtable, 4 Sept. 2013, https://blogs.sas.com/content/datamanagement/2013/09/04/a-data-governance-primer-part-1-finding-the-root-cause/.

“Causal Loop Construction: The Basics.” The Systems Thinker, 14 Jan. 2016, https://thesystemsthinker.com/causal-loop-construction-the-basics/.

“Cause and Effect Analysis: Using Fishbone Diagram and 5 Whys.” Visual Paradigm, https://www.visual-paradigm.com/project-management/fishbone-diagram-and-5-whys/.

“Determine the Root Cause: 5 Whys.” ISixSigma, 27 Nov. 2018, https://www.isixsigma.com/tools-templates/cause-effect/determine-root-cause-5-whys/.

“Information & Data Management Courses & Certification Online.” eLearningCurve, https://ecm.elearningcurve.com/category_s/213.htm.

“Root Cause Analysis Explained: Definition, Examples, and Methods.” Tableau, https://www.tableau.com/learn/articles/root-cause-analysis.

Data Governance in FIs: Intro to Data Governance

This blog series will focus on Data Governance in Financial Institutions. Our first post introduces data governance fundamentals. It will be followed by discussions of root cause analysis, metadata, and the five stages of data governance deployment, and a final post that crafts a business case for data governance.

Today’s industry leaders recognize data among their top enterprise assets. According to Gartner, the leading global research firm, 20-25% of enterprise value is directly attributable to the quality of a company’s data. However, Financial Institutions (FIs) often underutilize this key business driver by not establishing a formal data strategy.

Let’s look at some of the challenges to building a data strategy, opportunities for implementing a data strategy, critical components of a successful DG program, and the aspects of data that can be governed. We also want to discuss some potential consequences of poor DG implementation and the most important step to mitigate the risk of it occurring in a Financial Institution.

What is Data Governance?

Data Governance (DG) serves as the framework for defining the who, what, when, where, how, and why of your formal data strategy. Through a collection of policies, roles, and processes, DG ensures data is properly defined, managed, and used in pursuit of enterprise goals.

Challenges of Building a Data Strategy

Too often, the largest hindrance to building a data- and analytics-driven enterprise is the enterprise itself. For historical reasons, data tends to be siloed within internal business units, resulting in disparate collections of overlapping yet inconsistent data. Given that data is built and accumulated over time in various places in the organization, often via mergers and acquisitions, it can be difficult and time-consuming to gather and use the data.

Without a transparent view of enterprise-wide data, credible decision making becomes nearly impossible. More time is spent gathering and consolidating the data than analyzing it. The goal of DG, then, is to break down the silos in which data becomes segregated and foster a holistic approach to managing common data. Common data creates a shared understanding of information and is of paramount importance when sharing data between different systems and/or groups of people.

With the proper implementation of DG standards (data naming, quality, security, architecture, etc.), a firm can realize a variety of optimization-based benefits.

Data Strategy Opportunities

An enterprise that properly implements and executes DG creates opportunities for enhanced productivity.

For example, if an enterprise works with large data sets, having defined naming standards allows for data consistency across all commonly used domains (e.g., Customer, Transactions, Employee) within the enterprise. This results in increased productivity and a competitive advantage relative to other firms.
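As a toy illustration of how a naming standard becomes enforceable once it is written down, the Python sketch below checks column names against a hypothetical <domain>_<field> convention. The pattern and column names are invented for the example.

```python
# Hypothetical naming standard: columns must look like <domain>_<field>,
# where the domain is one of the governed data domains.
import re

STANDARD = re.compile(r"^(customer|transaction|employee)_[a-z][a-z0-9_]*$")
columns = ["customer_id", "transaction_amt", "EmpName"]

violations = [c for c in columns if not STANDARD.match(c)]
print(violations)  # ['EmpName'] fails the assumed standard
```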

As DG improves operational efficiencies, FIs can expect increased customer satisfaction rates, attracting both a loyal following from current customers and new prospects.

Critical Components of a Successful Data Governance Program

FIs generate a lot of information as part of their normal business processes, so it may be difficult to identify what data needs to be governed.

It is important to note that not all data needs to be governed. There are two types of data that do not need DG: department-specific data and application data not needed for regulatory reporting or cross-department communication.

However, there are three key types of data that should be governed to provide reliable information that can be leveraged across all departments of the FI:

  • Strategic data is unique and usually created within the company, providing a competitive advantage to the firm. Examples include customer insight, market insight, and risk model data.
  • Critical data materially affects external reporting and risk management, and/or supports critical business functions. This includes financial data, supply chain data, and counterparty data.
  • Shared data is used in multiple business processes, so its definition, quality, and format need to be synchronized. Examples include customer data shared by marketing, customer service, and sales, and counterparty data shared by risk management and pricing.

Critical Data Aspects

Beyond the data itself, there are multiple aspects of data that are critical to govern. A successful program will consider the following:

 

Data Ownership: The possession of and responsibility for information

Data Handling: Ensuring that research data is stored, archived or disposed of in a safe and secure manner

Data Allowable Values: The set of values to which a data element or property may be restricted

Metadata: A set of data that describes and gives information about other data

Data Storing: The recording of information in a storage medium

Data Architecture: The structure of an organization’s logical and physical data assets and data management resources

Data Quality: The degree to which data is accurate, complete, consistent, and fit for its intended use

Data Definitions: The specification of a field, referencing a data domain, that determines the data type and the format of data entry

Data Reporting: Collecting and formatting raw information and translating it into a digestible format to assess business performance

Poor DG Consequences

A word of caution: there is such a thing as poor DG implementation. If the program is poorly built, the enterprise will suffer.

Building inefficient processes, for example, can delay timelines for tasks like data retrieval and data analysis.

An inferior DG implementation may also create compliance issues. If the program is difficult to understand, enterprise employees may disregard your guidelines.

Overall, if DG is applied within internal silos, it cannot be optimized across the organization. The segregation of data that internal silos create needs to be broken down to achieve the goal of managing common data.

How to Mitigate Poor DG Risk

A DG program is most effective when the entire FI “buys in.” Without assistance from both data practices and business functions in the rollout of DG program initiatives, the program will likely fail. It is the responsibility of the business, IT, and internal operations functions to be fully engaged and coordinated in the implementation of DG program initiatives.

What’s Next?

Now that we have outlined what a successful Data Governance program includes, it is time to discuss Root Cause Analysis. Our next post in this series will discuss how to find root causes in FIs and recommend actions to solve problems that you may face when implementing a DG program.

 

RESOURCES:

“Information & Data Management Courses & Certification Online.” eLearningCurve, https://ecm.elearningcurve.com/category_s/213.htm.

Data Ownership, https://ori.hhs.gov/education/products/n_illinois_u/datamanagement/dotopic.html.

Data Handling, https://ori.hhs.gov/education/products/n_illinois_u/datamanagement/dhtopic.html.

“Administering and Working with Oracle Enterprise Data Management Cloud.” Oracle Help Center, 24 Nov. 2021, https://docs.oracle.com/en/cloud/saas/enterprise-data-management-cloud/dmcaa/property_allowed_values_102x340a19b7.html.

“Metadata.” Wikipedia, Wikimedia Foundation, 23 Dec. 2021, https://en.wikipedia.org/wiki/Metadata.

“What Is Data Storage?” IBM, https://www.ibm.com/topics/data-storage.

Olavsrud, Thor. “What Is Data Architecture? A Framework for Managing Data.” CIO, 24 Jan. 2022, https://www.cio.com/article/190941/what-is-data-architecture-a-framework-for-managing-data.html.

“What Is Data Quality? Definition and FAQs.” OmniSci, https://www.omnisci.com/technical-glossary/data-quality.

“Data Definitions.” IBM, https://www.ibm.com/docs/en/ecm/10.1.3?topic=objects-data-definitions.

“What Is Data Reporting and Why It’s Important?” Sisense, 21 May 2021, https://www.sisense.com/glossary/data-reporting/.

 

Optimal Pacing for Private Assets: An Example

The FRG Private Capital Forecasting (PCF) solution recently released a module for optimal pacing. Pacing refers to the planning of future commitments: deciding both their size and their timing. This is done to help a portfolio achieve or maintain its target allocations to private capital vehicles.

Creation of pacing plans is not straightforward. The plan should consider not just the allocations to private asset classes, but to other asset classes as well. It also needs to balance these commitments with liquidity constraints and the need to keep all asset classes from breaching risk limits.

The Pacing Module combines the class-leading PCF forecasting simulation, stated portfolio goals for allocations and limits, and a non-linear optimizer to create optimal pacing plans.

This blog post walks through an example of a pension fund in a net distribution scenario. The fund is actively selling down its portfolio to fund retirements. Further, it has only recently begun to invest in private capital: its allocations are low and need to be brought up to target.

PCF has been configured with this information. The fund manager has specified semi-annual rebalancing for the public side of the portfolio.

The simulation will run through 2027 and the plan will be created for vintage years 2020 – 2024.  Investments will be planned in the sub-asset classes of Private Equity (Buyout and Fund of Funds), Real Estate, and Venture.

The fund is experiencing outflows.  Overall NAV of the portfolio is declining.

[Figure: Expected quarterly cash needs and projected portfolio NAV]

 

The first graph shows the expected cash needs per quarter to fund employee retirements. This represents cash leaving the portfolio, driving the decline in total NAV.

The second chart shows the NAV growing out of the pandemic recession and then beginning to decline as cash requirements outstrip fund growth. The mean and interquartile range from the simulation are plotted.

Because the portfolio’s NAV is declining, pacing is extremely challenging. Investing too much risks an illiquid portfolio that cannot be sold to meet retiree needs. Investing too little might mean the portfolio never reaches its allocation target and undershoots the expected return. The portfolio manager is in a bind.

[Figure: Total portfolio allocation before PCF optimization]

The fund is underweight to private assets.  The manager needs to build the portfolio allocation to meet the expected return target, but as stated above, investing too much could cause liquidity problems down the road.

The Pacing Module takes the simulation of the portfolio and optimally chooses which vintages to invest in and how much to commit to each. Once the optimization has been run, we can see that the allocations through time are better aligned with the targets:

[Figure: Improved allocations after the optimization has been run]

 

If you would like more information about VOR Private Capital Forecasting or the VOR Pacing Optimization Module, please download our white papers here or contact our VOR team.

Dominic Pazzula is FRG’s Director of Risk and Asset Allocation. He is a specialist in investment management, asset allocation, portfolio construction, and risk management. 

The 5 Ws and H of IFRS 17 (Part 2)

Previously, we talked about the 5 Ws of IFRS 17. This blog post (Part 2) will discuss the H: How does IFRS 17 replace IFRS 4?

A Consistent Model

Figure 1: The components that make up IFRS 17 insurance contract liabilities.[1]

IFRS 17 introduces the General Measurement Model (GMM) to calculate insurance contract liabilities for all insurance and reinsurance contracts. It is made up of three components:

  1. Present Value of Future Cash Flows (PVFCF)
    1. Expected future cash flows – The current estimates of cash inflows and outflows (e.g., premiums, claims, expenses, and acquisition costs).
    2. Discount Rates – Current market discount rates, used to adjust the expected future cash flows for the time value of money.
  2. Risk Adjustment (RA) – The compensation a company requires for bearing insurance risk. Insurance risk is a type of non-financial risk and may reflect uncertainty in the amount of cash flows, their timing, or both.
  3. Contractual Service Margin (CSM) – An amount equal and opposite to the net cash inflow of the two previous components. This ensures no day-one profit is recognized in Profit or Loss (P&L) for any contract (a worked sketch with hypothetical figures follows).
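To make the offsetting mechanics concrete, here is a minimal Python sketch at initial recognition. The cash flows, discount rate, and risk adjustment are hypothetical figures chosen for illustration, not values prescribed by the Standard.

```python
# GMM at initial recognition (hypothetical figures): the CSM exactly offsets
# any net inflow so that no day-one profit is recognized in P&L.
expected_inflows = {1: 1000.0, 2: 1000.0}   # premiums by year (assumed)
expected_outflows = {1: 600.0, 2: 700.0}    # claims and expenses by year (assumed)
discount_rate = 0.03                        # assumed current market rate

def present_value(cash_flows: dict, rate: float) -> float:
    return sum(cf / (1 + rate) ** t for t, cf in cash_flows.items())

pv_in = present_value(expected_inflows, discount_rate)
pv_out = present_value(expected_outflows, discount_rate)
risk_adjustment = 150.0                     # assumed compensation for insurance risk

fulfilment_cash_flows = pv_out + risk_adjustment - pv_in
csm = max(0.0, -fulfilment_cash_flows)      # offsets a net inflow; zero otherwise
liability = fulfilment_cash_flows + csm     # day-one liability; day-one profit = 0
print(f"PV in={pv_in:.2f}  PV out={pv_out:.2f}  CSM={csm:.2f}")
```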

 

More Transparent Information

Figure 2: How IFRS 17 recognizes profit in P&L.[2]

IFRS 17 only allows insurers to recognize profit once insurance services are provided. This means that insurers can no longer recognize the premiums they receive as profit in P&L. Rather, at the end of each reporting period, insurers will report the portion of the CSM remaining as Insurance Revenue after they fulfil obligations such as paying claims for insured events.

Insurance Service Expenses reflect the costs incurred when fulfilling these obligations for a reporting period. This consists of incurred claims and expenses, acquisition costs, and any gains or losses from holding reinsurance contracts. The net amount of Insurance Revenue and Insurance Service Expenses makes up the Insurance Service Result. This approach differentiates the two drivers of profit for the insurer: Insurance Revenue and Investment Income. Investment Income represents the return on the underlying assets of investment-linked contracts, and Insurance Finance Expenses reflect the unwinding of, and changes in, the discount rates used to calculate the PVFCF and CSM.
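As a toy illustration of how the two profit drivers separate under this presentation, here is a short sketch with hypothetical reporting-period amounts; none of the figures come from the Standard.

```python
# Hypothetical amounts for one reporting period, separating the two profit drivers.
insurance_revenue = 420.0           # e.g., CSM released plus expected claims/expenses
insurance_service_expenses = 310.0  # incurred claims, expenses, acquisition costs
investment_income = 85.0            # return on underlying assets
insurance_finance_expenses = 40.0   # unwinding of / changes in discount rates

insurance_service_result = insurance_revenue - insurance_service_expenses
net_financial_result = investment_income - insurance_finance_expenses
print(insurance_service_result, net_financial_result)  # 110.0 45.0
```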

Better Comparability

Figure 3: A comparison of IFRS 4 and IFRS 17.[3]

Regarding the presentation of financial statements, IFRS 17 requires more granularity in the balance sheet than IFRS 4 (Figure 3), specifically a breakdown of insurance contract liabilities into PVFCF, RA, and CSM. This allows for improved analysis of an insurer’s products and business performance.

On the statement of comprehensive income, IFRS 17 has removed Premiums and replaced Change in Insurance Contract Liabilities with the new components introduced in the balance sheet – PVFCF, RA and CSM. Now, the first items listed present the insurance components that make up Insurance Service Result. This is followed by Investment Income and Insurance Finance Expenses, which together determine the Net Financial Result. With a clear distinction of the different sources of profit, this framework allows for better comparability among industries.

Conclusion

In summary, IFRS 17 is the accounting Standard that introduces a consistent model for measuring liabilities for all insurance contracts. It also increases the transparency of the source of insurance-related earnings by separating insurance services from investment returns, which provides global comparability for the first time in the insurance industry.

[1] Appendix B – Illustrations, IFRS 17 Effects Analysis by the IFRS Foundation (page 118).

[2] Preview of IFRS 17 Insurance Contracts, National Standard-Setters webinar by the IFRS Foundation (Slide 9).

[3] Preview of IFRS 17 Insurance Contracts, National Standard-Setters webinar by the IFRS Foundation (Slide 11).

Carmen Loh is a Risk Consultant with FRG. She graduated with her Actuarial Science degree in 2016 from Heriot-Watt University before joining FRG in the following fall. She is currently the subject matter expert on an IFRS 17 implementation project for a general insurance company in the APAC region.

RELATED:

The 5 Ws and H of IFRS 17 (Part 1)

 

The 5 Ws and H of IFRS 17 (Part 1)

International Financial Reporting Standards (IFRS) 17 Insurance Contracts, issued in 2017, represents a major overhaul of financial reporting for insurance companies. However, many in the financial industry are still unfamiliar with the Standard. This blog post, Part 1, aims to answer the five basic W questions of IFRS 17: Who, What, When, Where, and Why. The H, How, will be discussed in Part 2.

Who issued IFRS 17?

IFRS 17 is issued by the International Accounting Standards Board (IASB). The IASB specifies how companies must maintain and report their accounts.

What is IFRS 17?

IFRS 17 is the accounting Standard for insurance contracts. The IFRS are designed to bring consistency, transparency, and comparability within financial statements across various global industries, and IFRS 17 applies this approach to the insurance business.

When is the effective date?

The effective date was initially set for January 1, 2021, but industry leaders requested a delay due to the amount of effort required to implement the new Standard alongside IFRS 9. Additionally, in March 2020 the COVID-19 pandemic prompted the IASB to defer the final effective date of IFRS 17 to January 1, 2023.

Where does IFRS 17 apply?

IFRS 17 applies to all insurance companies using IFRS Standards. It has been estimated that 450 insurance companies worldwide will be affected. Insurance companies in Japan and the United States, however, use Generally Accepted Accounting Principles (GAAP), a rules-based approach that is more prescriptive than the principles-based approach of IFRS. Therefore, IFRS 17 does not directly impact Japan or the U.S., but it could affect related multinational companies with insurance business overseas.

Why was IFRS 17 developed?

This section discusses some of the main issues with the current reporting Standard (IFRS 4) that led to the issuance of IFRS 17: inconsistent accounting, little transparency, and lack of comparability (see Figure 1).

Figure 1: Some of the main issues with IFRS 4.[1]

Inconsistent Accounting

IFRS 17 was developed to replace IFRS 4, which was an interim Standard meant to limit changes to existing insurance accounting practices. As IFRS 4 did not provide detailed guidelines, there were many questions left unanswered about the expectations for insurers:

  • Are they required to discount their cash flows?
  • What discount rates should they use?
  • Do they amortize the incurred costs or expense them immediately?
  • Are they required to consider the time value of money when measuring the liabilities?

Hence, insurers came up with different practices to measure their insurance products.

Little Transparency

Analyzing financial statements has been difficult as some insurers do not provide complete information about the sources of profit recognized from insurance contracts. For example, some companies immediately recognize premiums received as revenue. There are also companies that do not separate the investment income from investment-linked contracts when measuring insurance contract liabilities. As a result, regulators cannot determine if the company is generating profit by providing insurance services or by benefiting from good investments.

Lack of Comparability

Some multinational companies consolidate their subsidiaries using different accounting policies, even for the same type of insurance contracts written in different countries. This makes it challenging for investors to compare the financial statements across different industries to evaluate investments.

How does IFRS 17 replace IFRS 4?

IFRS 17 introduces a standard model to measure insurance contract liabilities, changes the way insurers recognize profit (Insurance Revenue), and revamps the presentation of financial statements (see Figure 2). We will dive into these topics in Part 2 of this blog series.

Figure 2: A comparison of IFRS 4 and IFRS 17.[2]

 

[1] Appendix B – Illustrations, IFRS 17 Effects Analysis by the IFRS Foundation (page 118).

[2] Appendix B – Illustrations, IFRS 17 Effects Analysis by the IFRS Foundation (page 118).

 

Carmen Loh is a Risk Consultant with FRG. She graduated with her Actuarial Science degree in 2016 from Heriot-Watt University before joining FRG. She is currently the subject matter expert on an IFRS 17 implementation project for a general insurance company in the APAC region.

Model Updates for FRG’s VOR PCF Ensure Continuous Improvement

FRG regularly launches new models that will enhance the predictive capability of our VOR Private Capital Forecasting (PCF) solution. Together with our partner Preqin, FRG launched PCF last year to help private capital investors better forecast cash flows. Since then, behind the scenes our Business Analytics team has been hard at work fine-tuning the models used to analyze the probability distribution of cash flows generated by private capital investments.

PCF uses next-generation modeling techniques that allow us to incorporate macroeconomic data into cash flow models to better forecast the timing and magnitude of Capital Calls and Capital Distributions. This gives our clients the ability to stress test their portfolios under different economic scenarios.

FRG uses four models to forecast the cash flows important for private equity funds (a simplified sketch follows the list):

  • Probability of Call
  • Probability of Distribution
  • Size of the Call
  • Size of the Distribution
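To show how four such models can interact, here is a deliberately simplified quarterly simulation sketch. The functional forms, parameters, and names below are invented for illustration; they are not FRG’s production models.

```python
# Toy quarterly cash flow simulation driven by four stand-in models:
# call/distribution probabilities and call/distribution sizes.
import random

def p_call(q): return max(0.05, 0.6 - 0.05 * q)   # calls more likely early (assumed)
def p_dist(q): return min(0.6, 0.05 * q)          # distributions ramp up later (assumed)
def call_size(unfunded): return 0.25 * unfunded   # assumed fraction of unfunded commitment
def dist_size(nav): return 0.20 * nav             # assumed fraction of current NAV

def simulate(commitment=100.0, quarters=40, seed=0):
    rng = random.Random(seed)
    unfunded, nav, calls, dists = commitment, 0.0, 0.0, 0.0
    for q in range(quarters):
        if rng.random() < p_call(q):
            c = call_size(unfunded); unfunded -= c; nav += c; calls += c
        if nav > 0 and rng.random() < p_dist(q):
            d = dist_size(nav); nav -= d; dists += d
        nav *= 1.02                                # assumed quarterly fund growth
    return calls, dists, nav

print(simulate())  # total calls, total distributions, ending NAV
```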

The models are assessed for fit and robustness quarterly, when data updates from Preqin are incorporated. But our team of data scientists is always working to make them better and more predictive.

Throughout the past year the team has specifically refitted the models to remove LIBOR-dependent variables, recognizing that LIBOR availability is not guaranteed past 2021. We further refined the models with the goal of improving out-of-sample performance relative to the current models. Our model approval committee has concluded that these current models, like their predecessors, consistently outperform the Takahashi-Alexander (Yale) model for all vintages dating back more than 20 years.

For more information on our PCF tool, please visit our website.

AI in FIs: Learning Types and Functions of Machine Learning Algorithms

Through the lens of Financial Risk, this blog series will focus on Financial Institutions as a premier business use case for Artificial Intelligence and Machine Learning.

This blog series has covered how a financial institution (FI) can use machine learning (ML) and how these algorithms can augment existing methods for mitigating financial and non-financial risk. To tie it all together, the focus now will be on different learning types of ML algorithms:

  1. Supervised Learning
  2. Unsupervised Learning
  3. Semi-Supervised Learning

Deciding which learning type, and ultimately which algorithm, to use depends on two key factors: the data and the business use. With regard to data, there are two “formats” in which it exists. The first is structured data, which is organized and often takes the form of tables (columns and rows). The second is unstructured data, which may have a structure of its own but is not in a standardized format; examples include PDFs, recorded voice, and video feeds. Unstructured data can provide great value but needs to be reformatted so an algorithm can consume it.

 

[Figure: Learning types and functions of machine learning algorithms]

 

Supervised learning algorithms draw inferences from input datasets that have a well-defined dependent, or target, variable. This is referred to as labeled data. Consider the scenario when an FI wants to predict loss due to fraud. For this they would need a labeled dataset containing historical transactions with a target variable that populates for a known fraudulent transaction. The FI might then use a decision tree to separate the data iteratively into branches to determine estimates for likelihood of fraud. Once the decision tree captures the relationships in the data, it can then be deployed to estimate the potential for future fraud cases.
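As a minimal sketch of that workflow, the snippet below fits a decision tree on a toy labeled transaction set using scikit-learn. The features and data are hypothetical; any labeled fraud dataset would substitute.

```python
# Supervised learning: a decision tree trained on labeled transactions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features: [amount, hour_of_day, txns_in_last_24h]; label 1 = fraud.
X = np.array([[1200.0, 3, 14], [35.5, 13, 2], [980.0, 2, 11], [15.0, 18, 1]])
y = np.array([1, 0, 1, 0])

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
new_txn = np.array([[1100.0, 4, 9]])
print(tree.predict_proba(new_txn)[0, 1])  # estimated likelihood of fraud
```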

Unsupervised learning algorithms draw inferences from input datasets with an undefined dependent variable. This is referred to as unlabeled data. These kinds of algorithms are typically used for pre-work to prepare data for another process. This work ranges from data preparation to data discovery and, at times, includes dimensionality reduction, categorization, and segmentation. Returning to our fraud example, consider the data set without the target variable (i.e., no fraud indicator). In this scenario, the FI could use an unsupervised learning algorithm to identify the most suspicious transactions through means of clustering.
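Continuing the example without labels, here is a minimal clustering sketch: k-means groups the transactions, and an unusually small cluster becomes a candidate set for review. The data are hypothetical.

```python
# Unsupervised learning: cluster unlabeled transactions; sparsely populated
# clusters are flagged as potentially suspicious.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical unlabeled features: [amount, hour_of_day, txns_in_last_24h].
X = np.array([[35.5, 13, 2], [42.0, 12, 3], [38.9, 14, 2],
              [55.0, 11, 4], [1200.0, 3, 14]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
counts = np.bincount(kmeans.labels_)
rare_cluster = int(np.argmin(counts))               # the rarely assigned cluster
print(np.where(kmeans.labels_ == rare_cluster)[0])  # transaction indices to review
```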

Sometimes, a dataset will have both labeled and unlabeled observations, meaning a value for the target variable is known for a portion of the data. Data in this case can be used for semi-supervised learning, which is an iterative process that utilizes both supervised and unsupervised learning algorithms to complete a job. In our fraud example, a neural net may be used to predict likelihood of fraud based on the labeled data (supervised learning). The process can then use this model, along with a clustering algorithm (unsupervised learning), to assign a value to the fraud indicator for the most suspicious transactions in the unlabeled data.
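Here is a compact pseudo-labeling sketch of that iteration, with a logistic regression standing in for the neural net to keep the example short. The confidence thresholds and data are assumptions.

```python
# Semi-supervised learning via pseudo-labeling: score unlabeled rows with a
# supervised model, then adopt labels only where predictions are confident.
import numpy as np
from sklearn.linear_model import LogisticRegression

X_labeled = np.array([[1200.0, 3, 14], [35.5, 13, 2], [980.0, 2, 11], [15.0, 18, 1]])
y_labeled = np.array([1, 0, 1, 0])          # 1 = known fraudulent transaction
X_unlabeled = np.array([[1150.0, 4, 12], [28.0, 15, 1]])

model = LogisticRegression().fit(X_labeled, y_labeled)
probs = model.predict_proba(X_unlabeled)[:, 1]

confident = (probs > 0.9) | (probs < 0.1)   # assumed confidence threshold
pseudo_labels = (probs > 0.5).astype(int)
print(pseudo_labels[confident])             # labels adopted for the next iteration
```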

To learn more about ML algorithms and their applications for risk mitigation, please contact us or visit our Resources page for other ML and AI material, including the New Machinist Journal Vol. 1–5.

 Hannah Wiser is an associate consultant with FRG. After graduating with her Master’s in Quantitative Economics and Econometrics from East Carolina University in 2019, she joined FRG and has worked on projects focusing on technical communication and data governance.

 

List of Terms to Know

AI in FIs: Introducing Machine Learning Algorithms for Risk

Through the lens of Financial Risk, this blog series will focus on Financial Institutions as a premier business use case for Artificial Intelligence and Machine Learning.

For any application of machine learning (ML) being considered for industry practice, the most important thing to remember is that business needs must drive the selection and design of the algorithm used for computation. A financial institution (FI) must be smart about which of these advanced tools are deployed to generate optimal value for the business. For many FIs, this “optimal value” falls into one of two categories: increasing profitability or mitigating risk. In this post, we will focus on the use cases for ML specifically related to risk.

Risk can be broken out between financial risk and nonfinancial risk. Financial risk involves uncertainty in an investment or business that can result in monetary loss. For example, when a homeowner defaults on a loan, the lender loses some or all of those funds.

Nonfinancial risk, on the other hand, is loss an FI experiences from consequences not rooted in financial initiatives. Certain events, such as negative news stories, may not be directly related to the financial side of the business but could deter potential customers and hirable talent. Some areas of risk may be considered either financial or nonfinancial risk, depending on the context.

When properly employed, ML enhances the capabilities of FIs to assess both their financial and nonfinancial risk in two ways. First, it enables skilled workers to do what they do best because they can off-load grunt work, such as cleaning data, to the machine. By deploying a tool to support existing (and cumbersome) business operations, the analyst has more time to focus on their specialty. Second, a machine has the technical capability to reveal nuance in the data that even a specialist would not be able to do alone. This supplements the analyst’s understanding of the data and enriches the data’s worth to the business.

The image below elaborates on the many kinds of risk managed by an FI, in addition to practical ways ML can supplement existing methods for risk mitigation.

More complex algorithms may do a better job of fitting the data, model at a higher capacity, or utilize non-traditional types of data (e.g., images, voice, and PDFs), but this all comes at a cost. The intricacies of implementing an ML algorithm, the commitment of time required to build a model (tuning hyperparameters can take days), and the management of unintended bias and overfitting render ML a considerable investment of resources. In addition, the robust requirements for computational power may require an FI to do some pre-work if a stable and capable infrastructure is not already in place.

As innovative as ML can be, any process will only be successful in industry if it produces value beyond its costs. Thanks to advances in computational power and available data, new approaches (e.g., neural nets) have broadened the universe of ML and its relevance, as well as better enabled traditional methods of ML (e.g., time series models). We will expand more on specific algorithms and risk mitigation use cases in later discussions.

Interested in reading more? Subscribe to the FRG blog to keep up with AI in FIs.

Hannah Wiser is an assistant consultant with FRG. After graduating with her Master’s in Quantitative Economics and Econometrics from East Carolina University in 2019, she joined FRG and has worked on projects focusing on technical communication and data governance.

 

 

AI in FIs: Foundations of Machine Learning in Financial Risk

Through the lens of Financial Risk, this blog series will focus on Financial Institutions as a premier business use case for Artificial Intelligence and Machine Learning.

Today, opportunities exist for professionals to delegate time-intensive, dense, and complex tasks to machines. Machine Learning (ML), the branch of Artificial Intelligence (AI) that enables systems to learn tasks automatically, is becoming much more robust as technological advances ease and lessen resource constraints.

Financial Institutions (FI) are constantly under pressure to keep up with evolving technology and regulatory requirements. Compared to what has been used in the past, modern tools have become more user-friendly and flexible; they are also easily integrated with existing systems. This evolution is enabling advanced tools such as ML to regain relevance across industries, including finance.

So, how does ML work? Imagine someone learning to throw a football. Over time, the aspiring quarterback learns to adjust the speed of the ball, the strength of the throw, and the trajectory to meet the expected routes of the receivers. In a similar way, machines are trained to perform a specific task, such as clustering, by means of an algorithm. Just as the quarterback is trained by a coach, a machine learns to perform a specific task from an ML algorithm. This expands the possibilities for ways technology can be used to add value to the business.

What does this mean for FIs? The benefit of ML is that value can be added in areas where efficiency, prediction, and accuracy are most critical.  To accomplish this, the company aligns these four components: data, applications, infrastructure, and business needs.

[Figure: How a company aligns data, applications, infrastructure, and business needs]

The level of data maturity of FIs determines their capacity for effectively utilizing both structured and unstructured data. A well-established data governance framework lays the foundation for proper use of data for a company. Once their structured data is effectively governed, sourced, analyzed, and managed, they can then employ more advanced tools such as ML to supplement their internal operations. Unstructured data can also be used, but the company must first harness the tools and computing power capable of handling it.

Many companies are turning to cloud computing for their business-as-usual processes and for deploying ML. Cloud computing can be hosted on-premises or with public cloud services; the choice is a matter of preference. Either method provides scalable computing power, which is essential when using ML algorithms to unlock the potential value of massive amounts of data.

Interested in reading more? Subscribe to the FRG blog to keep up with AI in FIs.

Hannah Wiser is an assistant consultant with FRG. After graduating with her Master’s in Quantitative Economics and Econometrics from East Carolina University in 2019, she joined FRG and has worked on projects focusing on technical communication and data governance.

 

 
