The 5 Ws and H of IFRS 17 (Part 1)

International Financial Reporting Standards (IFRS) 17 Insurance Contracts, issued in 2017, represents a major overhaul of financial reporting for insurance companies. However, many in the financial industry are still unfamiliar with the Standard. This blog post, Part 1, aims to answer the five basic questions of IFRS 17: Who, What, When, Where, and Why. The H, or How, will be discussed in Part 2.

Who issued IFRS 17?

IFRS 17 was issued by the International Accounting Standards Board (IASB), the body that specifies how companies must maintain and report their accounts.

What is IFRS 17?

IFRS 17 is the accounting Standard for insurance contracts. The IFRS are designed to bring consistency, transparency, and comparability to financial statements across industries worldwide, and IFRS 17 applies this approach to the insurance business.

When is the effective date?

The effective date was initially set for January 1, 2021, but industry leaders requested a delay because of the effort required to implement the new Standard alongside IFRS 9. In March 2020, the disruption caused by the COVID-19 pandemic led the IASB to defer the effective date of IFRS 17 to January 1, 2023.

Where does IFRS 17 apply?

IFRS 17 applies to all insurance companies that use the IFRS Standards; it has been estimated that about 450 insurance companies worldwide will be affected. Insurance companies in Japan and the United States, however, use Generally Accepted Accounting Principles (GAAP), a rules-based approach that is rigorous compared with the principles-based approach of IFRS. IFRS 17 therefore does not directly affect Japan or the U.S., but it could affect multinational groups with insurance operations overseas.

Why was IFRS 17 developed?

This section discusses some of the main issues with the current reporting Standard, IFRS 4, that led to the issuance of IFRS 17: inconsistent accounting, little transparency, and lack of comparability (see Figure 1).

Figure 1: Some of the main issues with IFRS 4.[1]

Inconsistent Accounting

IFRS 17 was developed to replace IFRS 4, which was an interim Standard meant to limit changes to existing insurance accounting practices. As IFRS 4 did not provide detailed guidelines, there were many questions left unanswered about the expectations for insurers:

  • Are they required to discount their cash flows?
  • What discount rates should they use?
  • Do they amortize the incurred costs or expense them immediately?
  • Are they required to consider the time value of money when measuring the liabilities?

Hence, insurers came up with different practices to measure their insurance products.

Little Transparency

Analyzing financial statements has been difficult as some insurers do not provide complete information about the sources of profit recognized from insurance contracts. For example, some companies immediately recognize premiums received as revenue. There are also companies that do not separate the investment income from investment-linked contracts when measuring insurance contract liabilities. As a result, regulators cannot determine if the company is generating profit by providing insurance services or by benefiting from good investments.

Lack of Comparability

Some multinational companies consolidate their subsidiaries using different accounting policies, even for the same type of insurance contracts written in different countries. This makes it challenging for investors to compare the financial statements across different industries to evaluate investments.

How does IFRS 17 replace IFRS 4?

IFRS 17 introduces a standard model to measure insurance contract liabilities, changes the way insurers recognize profit (Insurance Revenue), and revamps the presentation of financial statements (see Figure 2). We will dive into these topics in Part 2 of this blog series.

Figure 2: A comparison of IFRS 4 and IFRS 17.[2]

 

[1] Appendix B – Illustrations, IFRS 17 Effects Analysis by the IFRS Foundation (page 118).

[2] Appendix B – Illustrations, IFRS 17 Effects Analysis by the IFRS Foundation (page 118).

 

Carmen Loh is a Risk Consultant with FRG. She graduated with her Actuarial Science degree in 2016 from Heriot-Watt University before joining FRG. She is currently the subject matter expert on an IFRS 17 implementation project for a general insurance company in the APAC region.

Model Updates for FRG’s VOR PCF Ensure Continuous Improvement

FRG regularly launches new models that will enhance the predictive capability of our VOR Private Capital Forecasting (PCF) solution. Together with our partner Preqin, FRG launched PCF last year to help private capital investors better forecast cash flows. Since then, behind the scenes our Business Analytics team has been hard at work fine-tuning the models used to analyze the probability distribution of cash flows generated by private capital investments.

PCF uses next-generation modeling techniques that allow us to incorporate macroeconomic data into cash flow models to better forecast the timing and magnitude of Capital Calls and Capital Distributions. This gives our clients the ability to stress test their portfolios against different economic scenarios.

FRG uses four models to forecast the cash flows important for private equity funds (a minimal sketch of how such models can be combined follows the list):

  • Probability of Call
  • Probability of Distribution
  • Size of the Call
  • Size of the Distribution
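To illustrate the idea, here is a minimal sketch of how four fitted models of this kind might be combined to simulate one quarter of fund cash flows. It is illustrative only, not the PCF models themselves; all model objects, parameters, and distributions below are hypothetical placeholders.

```python
# Illustrative sketch only (not FRG's PCF models): combining a probability-of-call,
# probability-of-distribution, and two size models to simulate net cash flow.
import numpy as np

rng = np.random.default_rng(0)

def simulate_quarter(p_call, p_dist, call_size_model, dist_size_model, n_sims=10_000):
    """Simulate net cash flow to the investor for one quarter across n_sims paths."""
    calls = rng.random(n_sims) < p_call            # does a capital call occur?
    dists = rng.random(n_sims) < p_dist            # does a distribution occur?
    call_amounts = np.where(calls, call_size_model(n_sims), 0.0)
    dist_amounts = np.where(dists, dist_size_model(n_sims), 0.0)
    return dist_amounts - call_amounts             # distributions minus calls

# Placeholder size models: draws from assumed lognormal distributions.
net = simulate_quarter(
    p_call=0.35, p_dist=0.20,
    call_size_model=lambda n: rng.lognormal(mean=1.0, sigma=0.5, size=n),
    dist_size_model=lambda n: rng.lognormal(mean=1.2, sigma=0.6, size=n),
)
print(net.mean(), np.percentile(net, [5, 95]))
```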

The models are assessed for fit and robustness quarterly, when data updates from Preqin are incorporated. But our team of data scientists is always working to make them better and more predictive.

Throughout the past year the team has refitted the models to remove LIBOR-dependent variables, recognizing that LIBOR availability is not guaranteed past 2021. We further refined the models with the goal of improving their out-of-sample performance relative to the current models. Our model approval committee has concluded that the new models, like their predecessors, consistently outperform the Takahashi-Alexander (Yale) model for all vintages dating back more than 20 years.

For more information on our PCF tool, please visit our website.

AI in FIs: Learning Types and Functions of Machine Learning Algorithms

Through the lens of Financial Risk, this blog series will focus on Financial Institutions as a premier business use case for Artificial Intelligence and Machine Learning.

This blog series has covered how a financial institution (FI) can use machine learning (ML) and how these algorithms can augment existing methods for mitigating financial and non-financial risk. To tie it all together, the focus now will be on different learning types of ML algorithms:

  1. Supervised Learning
  2. Unsupervised Learning
  3. Semi-Supervised Learning

Deciding which learning type and ultimately which algorithm to use depends on two key factors: the data and the business use. With regard to data, there are two “formats” in which it exists. The first type is structured data. This type of data is organized and often takes the form of tables (columns and rows). The second type is unstructured data. This type of data may have a structure of its own, but is not in a standardized format. Examples include PDFs, recorded voice, and video feeds. This data can provide great value but needs to be reformatted so an algorithm can consume it.

Learning Types and Functions of Machine Learning Algorithms

Supervised learning algorithms draw inferences from input datasets that have a well-defined dependent, or target, variable. This is referred to as labeled data. Consider the scenario in which an FI wants to predict loss due to fraud. For this they would need a labeled dataset containing historical transactions with a target variable that flags known fraudulent transactions. The FI might then use a decision tree to separate the data iteratively into branches and estimate the likelihood of fraud. Once the decision tree captures the relationships in the data, it can be deployed to estimate the potential for future fraud cases.
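To make this concrete, here is a minimal sketch of that supervised workflow. It is illustrative only, not FRG's model; the file name and columns ("amount", "merchant_risk", "is_fraud") are hypothetical.

```python
# Minimal sketch: fit a decision tree to labeled transaction data to estimate fraud likelihood.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

transactions = pd.read_csv("transactions.csv")            # historical, labeled data (hypothetical)
X = transactions[["amount", "merchant_risk"]]              # explanatory features
y = transactions["is_fraud"]                               # target: 1 = known fraudulent transaction

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

tree = DecisionTreeClassifier(max_depth=5, random_state=0)
tree.fit(X_train, y_train)

print(tree.score(X_test, y_test))                          # accuracy on held-out data
fraud_likelihood = tree.predict_proba(X_test)[:, 1]        # estimated probability of fraud
```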

Unsupervised learning algorithms draw inferences from input datasets with an undefined dependent variable. This is referred to as unlabeled data. These kinds of algorithms are typically used for pre-work to prepare data for another process. This work ranges from data preparation to data discovery and, at times, includes dimensionality reduction, categorization, and segmentation. Returning to our fraud example, consider the data set without the target variable (i.e., no fraud indicator). In this scenario, the FI could use an unsupervised learning algorithm to identify the most suspicious transactions through means of clustering.
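A minimal sketch of that unsupervised case might look like the following. Using distance from a cluster center as a suspicion score is just one simple way to flag unusual transactions; the data source and column names are again assumptions.

```python
# Minimal sketch: cluster unlabeled transactions and treat the points farthest
# from their cluster center as the most suspicious.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

transactions = pd.read_csv("transactions_unlabeled.csv")    # no fraud indicator (hypothetical)
features = StandardScaler().fit_transform(
    transactions[["amount", "merchant_risk"]]                # hypothetical columns
)

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(features)

# Distance of each transaction from its assigned cluster center; large distances
# are flagged for review as potentially suspicious.
distances = np.linalg.norm(features - kmeans.cluster_centers_[kmeans.labels_], axis=1)
transactions["suspicion_score"] = distances
most_suspicious = transactions.nlargest(20, "suspicion_score")
```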

Sometimes, a dataset will have both labeled and unlabeled observations, meaning a value for the target variable is known for a portion of the data. Data in this case can be used for semi-supervised learning, which is an iterative process that utilizes both supervised and unsupervised learning algorithms to complete a job. In our fraud example, a neural net may be used to predict likelihood of fraud based on the labeled data (supervised learning). The process can then use this model, along with a clustering algorithm (unsupervised learning), to assign a value to the fraud indicator for the most suspicious transactions in the unlabeled data.
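The sketch below shows a simplified, self-training variant of that iterative process: a small neural net is fit on the labeled rows, and its most confident predictions are used to label the unlabeled rows (the clustering step described above is omitted for brevity). The file, column names, and confidence cutoff are hypothetical.

```python
# Minimal sketch of semi-supervised labeling via self-training.
import pandas as pd
from sklearn.neural_network import MLPClassifier

data = pd.read_csv("transactions_partially_labeled.csv")    # is_fraud is NaN where unknown
features = ["amount", "merchant_risk"]                       # hypothetical columns

labeled = data.dropna(subset=["is_fraud"])
unlabeled = data[data["is_fraud"].isna()].copy()

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
net.fit(labeled[features], labeled["is_fraud"].astype(int))  # supervised step on labeled rows

# Score the unlabeled transactions and pseudo-label the most suspicious ones.
scores = net.predict_proba(unlabeled[features])[:, 1]
unlabeled["is_fraud"] = (scores > 0.9).astype(float)         # assumed confidence cutoff

# The newly labeled rows can be folded back in and the model refit,
# repeating until the labels stabilize.
```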

To learn more about ML algorithms and their applications for risk mitigation, please contact us or visit our Resources page for other ML and AI material, including the New Machinist Journal Vol. 1–5.

 Hannah Wiser is an associate consultant with FRG. After graduating with her Master’s in Quantitative Economics and Econometrics from East Carolina University in 2019, she joined FRG and has worked on projects focusing on technical communication and data governance.

 


AI in FIs: Introducing Machine Learning Algorithms for Risk

Through the lens of Financial Risk, this blog series will focus on Financial Institutions as a premier business use case for Artificial Intelligence and Machine Learning.

For any application of machine learning (ML) being considered for industry practice, the most important thing to remember is that business needs must drive the selection and design of the algorithm used for computation. A financial institution (FI) must be smart about which of these advanced tools are deployed to generate optimal value for the business. For many FIs, this “optimal value” can refer to one of two categories: increasing profitability or mitigating risk. In this post, we will focus on the use cases for ML specifically related to risk.

Risk can be broken out between financial risk and nonfinancial risk. Financial risk involves uncertainty in investment or business that can result in monetary loss. For example, when a homeowner defaults on a loan, the lender will lose some or all of those funds.

Nonfinancial risk, on the other hand, is loss an FI experiences from consequences not rooted in financial initiatives. Certain events, such as negative news stories, may not be directly related to the financial side of the business but could deter potential customers and hirable talent. Some areas of risk may be considered either financial or nonfinancial risk, depending on the context.

When properly employed, ML enhances the capabilities of FIs to assess both their financial and nonfinancial risk in two ways. First, it enables skilled workers to do what they do best because they can off-load grunt work, such as cleaning data, to the machine. By deploying a tool to support existing (and cumbersome) business operations, the analyst has more time to focus on their specialty. Second, a machine has the technical capability to reveal nuance in the data that even a specialist would not be able to do alone. This supplements the analyst’s understanding of the data and enriches the data’s worth to the business.

The image below elaborates on the many kinds of risk managed by an FI, in addition to practical ways ML can supplement existing methods for risk mitigation.

More complex algorithms may do a better job of fitting the data, model at a higher capacity, or utilize non-traditional types of data (e.g., images, voice, and PDFs), but this all comes at a cost. The intricacies of implementing an ML algorithm, the commitment of time required to build a model (i.e., tuning hyperparameters can take days), and the management of unintended bias and overfitting render ML a considerable investment of resources. In addition, the robust requirements for computational power may require an FI to do some pre-work if a stable and capable infrastructure is not already in place.

As innovative as ML can be, any process will only be successful in industry if it produces value beyond its costs. Thanks to advances in computational power and available data, new approaches (e.g., neural nets) have broadened the universe of ML and its relevance, as well as better enabled traditional methods of ML (e.g., time series models). We will expand more on specific algorithms and risk mitigation use cases in later discussions.

Interested in reading more? Subscribe to the FRG blog to keep up with AI in FIs.

Hannah Wiser is an assistant consultant with FRG. After graduating with her Master’s in Quantitative Economics and Econometrics from East Carolina University in 2019, she joined FRG and has worked on projects focusing on technical communication and data governance.

 

 

AI in FIs: Foundations of Machine Learning in Financial Risk

Through the lens of Financial Risk, this blog series will focus on Financial Institutions as a premier business use case for Artificial Intelligence and Machine Learning.

Today, opportunities exist for professionals to delegate time-intensive, dense, and complex tasks to machines. Machine Learning (ML) has the ability to automate Artificial Intelligence (AI) and is becoming much more robust as technological advances ease and lessen resource constraints.

Financial Institutions (FIs) are constantly under pressure to keep up with evolving technology and regulatory requirements. Compared to what has been used in the past, modern tools have become more user-friendly and flexible; they are also easily integrated with existing systems. This evolution is enabling advanced tools such as ML to regain relevance across industries, including finance.

So, how does ML work? Imagine someone is learning to throw a football. Over time, the to-be quarterback is trained to understand how to adjust the speed of the ball, the strength of the throw, and the path of trajectory to meet the expected routes of the receivers. In a similar way, machines are trained to perform a specific task, such as clustering, by means of an algorithm. Just as the quarterback is trained by a coach, a machine learns to perform a specific task from an ML algorithm. This expands the possibilities for ways technology can be used to add value to the business.

What does this mean for FIs? The benefit of ML is that value can be added in areas where efficiency, prediction, and accuracy are most critical.  To accomplish this, the company aligns these four components: data, applications, infrastructure, and business needs.

The level of data maturity of FIs determines their capacity for effectively utilizing both structured and unstructured data. A well-established data governance framework lays the foundation for proper use of data for a company. Once their structured data is effectively governed, sourced, analyzed, and managed, they can then employ more advanced tools such as ML to supplement their internal operations. Unstructured data can also be used, but the company must first harness the tools and computing power capable of handling it.

Many companies are turning to cloud computing for their business-as-usual processes and for deploying ML. There are options for hosting cloud computing either on-premises or with public cloud services, but these are a matter of preference. Either method provides scalable computing power, which is essential when using ML algorithms to unlock the potential value that massive amounts of data provides.

Interested in reading more? Subscribe to the FRG blog to keep up with AI in FIs.

Hannah Wiser is an assistant consultant with FRG. After graduating with her Master’s in Quantitative Economics and Econometrics from East Carolina University in 2019, she joined FRG and has worked on projects focusing on technical communication and data governance.

 

 

Data Management – Leveraging Data for a Competitive Advantage

This is the first in a series of blogs that explore how data can be an asset or a risk to organizations in an uncertain economic climate.

Humanity has always valued novelty. Since the advent of the Digital Age, this preference has driven change at an astronomical pace. For example, more data was generated in the last two years than in the entire human history to date, a concept made more staggering by Machine Learning and Artificial Intelligence tools that allow users to access and analyze data as never before. The question now is: how can business leaders and investors best make sense of this information and use it for their competitive advantage?

Traditionally, access to good data has been a limiting factor. Revolutionary business strategies were reserved for those who knew how to obtain, prepare, and analyze it. While top-tier decision making is still data- and insight-driven, today’s data challenges are characterized more by glut than scarcity, both in terms of overall volume of information and the tools available to make sense of it. As of today, only 0.5% of data that is produced is even analyzed.

This overabundance of information and tech tools has ironically led to greater uncertainty for business leaders. Massive data sets and powerful, user-friendly tools often mask underlying issues: many firms maintain and process duplicates of their data, creating silos of critical but unconnected data that must be sorted and reconciled. Analysts still spend about 80% of their time collecting and preparing their data and only 20% analyzing it.

Global interconnectivity is making the world smaller and more competitive. Regulators, who understand the power of data, are increasing controls over it. Now, more than ever, it is critical for firms to take action. To remain competitive, organizations must understand the critical data that drives their business so they can use it, along with alternative data sets, for future decision making; otherwise they face obsolescence. These are not just internal concerns. Clients are also requesting more customized services and demanding to understand how firms are using their information. Firms must identify critical data and understand that not all data is, or should be, treated the same, so they can extract the full power of the information and meet client and regulatory requirements.

Let’s picture data as an onion. As the core of the onion supports its outer layers, the ‘core’ or critical enterprise data supports all the functions of a business. When the core is strong, so is the rest of the structure. When the core is contaminated or rotten – that is a problem, for the onion and for your company.

A comparison picture showing an onion with healthy core data vs. an onion with a contaminated core.

Data that is core to a business – information like client IDs, addresses, products and positions, to name a few examples – must be solid and healthy enough to support the outer layers of data use and reporting in the organization. This enterprise data must be defined, clean and unique, or the firm will waste time cleaning and reconciling it, and the client, business and regulatory reports that it supports will be inaccurate.

How do you source, define and store your data to cleanly extract the pieces you need? Look at the onion again. You could take the chainsaw approach to slice the onion, which would give you instant access to everything inside, good and contaminated, and would probably spoil your dish. Likewise, if you use bad data at the core, any calculations you perform on it or reports aggregating the data will not be correct. If you need a clean slice of onion for a specific recipe (or calculated data required for a particular report), precision and cleanliness of the slice (good core data and a unique contextual definition) are key.

Once your core data is unique, supported and available, clients, business and corporate users can combine it with alternative and non-traditional data sets to extract information, enhance it and add value. As demand for new “recipes” of data (for management, client or regulatory reporting) is ever increasing, firms that do not clean up and leverage their core data effectively will become obsolete. These demands range from data needed for instant access and client reporting across different form factors (e.g., web, iOS and Android apps), to data visualization and manipulation tools for employees analyzing new and enhanced information to determine trends. Demand also stems from the numerous requirements needed to comply with the complex patchwork of regional financial regulations across the globe. Many different users, many different recipes, all reliant on the health of their core data (the onion core).

What is the actionable advice when you read a headline like: “A recent study in the Harvard Business Review found that over 92% of surveyed firms agreed that data analytics for decision making will be more important 2 years from now”? We have some ideas. In this blog series, FRG Data Advisory & Analytics will take you through several use cases to outline what data is foundational or core to business operations and how to achieve the contextual precision demanded from the market and regulators within our current environment of uncertainty, highlighting both how data can be an asset, or a potential risk, if not treated appropriately.

Dessa Glasser, Ph.D., is an FRG Principal Consultant with 30+ years of experience designing and implementing innovative solutions and organizations in data, risk, and analytics. She leads the Data Advisory & Analytics Team for FRG and focuses on data, analytics and regulatory solutions for clients.

Edward Hanlon is a Senior Consultant and Engagement Manager on FRG’s Data Advisory & Analytics Team. He focuses on development and implementation of data strategy solutions for FRG, leveraging previous experience launching new Digital products and reengineering operational models as a Digital Technology platform owner and program lead in financial services.

 

Economic Impact Analysis for Credit Unions

In a recent webinar I participated in with SAS, we discussed Economic Impact Analysis (EIA). While EIA is similar in concept to stress testing, its main goal is to allow credit unions to move quickly to evaluate economic changes to their portfolios, such as those brought about by a crisis like the COVID-19 pandemic.

There are four main components to EIA.

  1. Portfolio data: At a minimum this needs to be segment-level data with loss performance through time. If needed, it can be augmented with external data.
  2. Scenarios: Multiple economic possibilities are necessary to help assess the timing and magnitude of potential future loss.
  3. Models or methodologies: These are required to link scenarios to the portfolio to forecast loss amounts.
  4. Visualization of results: This is essential to clearly understand the portfolio's loss behavior. While tables are useful, nothing illustrates odd behavior better than a picture (e.g., a box plot, tree map, or histogram).

A credit union looking for a practical approach for getting started should consider the following steps:

  • Start with segment-level data instead of account-level data. This should reduce the common complexities that arise when sourcing and cleaning account-level data.
  • Develop segment-level models or methodologies to capture the impacts of macroeconomic changes. These can be simple provided they incorporate relationships to macroeconomic elements (a minimal sketch of one such approach follows this list).
  • Create multiple scenarios; the more the better. Different scenarios will provide different insights into how the portfolio reacts to changing macroeconomic environments.
  • Execute the models and explore the results. This is where (I believe) the fun begins. Be curious – change portfolio assumptions (e.g., growth or run-off) and scenarios to see how losses react.
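As one example of how simple a starting point can be, the sketch below links a segment's historical loss rate to a single macroeconomic driver and applies it to several scenarios. This is a minimal illustration under assumed data and column names, not FRG's methodology.

```python
# Minimal sketch: a segment-level loss-rate model tied to the unemployment rate,
# applied to multiple economic scenarios. Data, columns, and scenario values are hypothetical.
import pandas as pd
import statsmodels.api as sm

history = pd.read_csv("segment_history.csv")        # columns: quarter, loss_rate, unemployment
X = sm.add_constant(history[["unemployment"]])
model = sm.OLS(history["loss_rate"], X).fit()

scenarios = pd.DataFrame({
    "scenario": ["baseline", "moderate recession", "severe recession"],
    "unemployment": [4.0, 7.5, 11.0],                # assumed scenario values
})
scenarios["forecast_loss_rate"] = model.predict(
    sm.add_constant(scenarios[["unemployment"]])
)
print(scenarios)                                     # compare forecast losses across scenarios
```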

Now is the time to act, to gain an understanding about the economy’s impact on one’s portfolio. But it is worth mentioning this is also an investment into the future. As mentioned earlier, EIA has its roots in stress testing. By creating an EIA process now, a credit union not only better positions itself to build a robust stress test platform but also has the foundation to tackle CECL.

To view the webinar on demand, please visit NAFCU.

Jonathan Leonardelli, FRM, Director of Business Analytics for the Financial Risk Group, leads the group responsible for model development, data science, and technical communication. He has over 15 years’ experience in the area of financial risk.

Avoiding Bureaucratic Phrasing

Employees developed plans during the course of the project in order to create a standardized process with respect to regulation guidelines.

Did you understand that sentence on the first read? It is filled with bureaucratic phrasing, which makes the information more complex than necessary.

In the workplace, “bureaucratic” means involving complicated rules and processes that make something unnecessarily slow and difficult. People tend to use this style of phrasing because they believe there is permanence in writing. Say something and it’s gone, but write it down and it’s with us forever.

When people believe their writing is out there for all to see, they want to sound as professional and as knowledgeable as possible. But adding bureaucratic language isn’t the best way to sound like an expert. Many complex phrases read better when they are stripped down into simple words. For example, in the original sentence above, “in order to” can be reduced to “to” and “during the course of” can be simplified to “during”:

Employees developed plans during the project to create a standardized process with respect to regulation guidelines.

Using bureaucratic phrasing can make readers feel inadequate and indirectly exclude them from the conversation. This is why using plain, straightforward language in your writing is recommended instead.

The key is learning how to turn those overly complex phrases into simple words that mean the same thing. Here are some examples:

Bureaucratic phrase → simple word or phrase:

  • Along the lines of → Like
  • As of this date → Yet, still, or now
  • At all times → Always
  • Due to the fact that → Because
  • Concerning the matter of → About
  • For the purpose of → For, to
  • In spite of the fact that → Although

One guideline is to avoid words and phrases that you would not use in everyday speech. You would never say, “May I look over your paper in the near future in order to review it?” Why write it?

The goal of any documentation, whether it be a technical design document or an email, is to state your main point in a simple manner. Your readers should be able to easily find important information, quickly understand what information they should store for later use, and immediately recognize what is being asked of them. Avoiding bureaucratic phrasing can help you accomplish this.

Resources:

  • Hopkins, Graham. “Why bureaucratic jargon is just a pompous waste of words.” The Guardian, 12 Sept. 2000.
  • Johnson-Sheehan, Richard. Technical Communication Today: Special Edition for Society for Technical Communication Foundation Certification, 5th ed.

Samantha Zerger, business analytics consultant with FRG, is skilled in technical writing. Since graduating from the North Carolina State University’s Financial Mathematics Master’s program in 2017 and joining FRG, she has taken on leadership roles in developing project documentation as well as improving internal documentation processes.

RELATED:

Improving Business Email

 

How COVID-19 Could Affect Private Capital Investors

A new blog by Preqin explores what COVID-19 could mean for private capital investors.

FRG and Preqin, an industry-leading provider of data, analytics and insights for the alternative assets community, partnered to develop a novel cash flow prediction model. The model is guided by FRG’s innovative methodology and powered by Preqin’s fund-level cash flow data.

Analysts used this tool in conjunction with the release of FRG’s Pandemic Economic Scenario to assess the impact of a recession triggered by the novel coronavirus on capital calls, distributions and net cash flows.

In the blog, Preqin’s Jonathon Furer examines an analysis created by FRG, focusing on 2017–2019 vintage funds, which represent 72% of the $2.63tn in callable dry powder that the private capital industry has raised since 2000. “Assuming the global economy undergoes a significant but brief recession, and then recovers, our model suggests GPs will respond in two stages,” Furer writes.

Read about the projected stages in the full analysis, Why COVID-19 Means Investors Should Expect Lower Capital Calls and Distributions in 2020.

FRG has 20+ years of experience applying stress testing to portfolios for banks and asset allocators. We developed this unique model to enable investors to stress test private capital portfolios for a wide range of macroeconomic shocks. We are ready to help investors looking to better understand portfolio dynamics for capital planning and pacing, or risk control for a black swan event.

Download the Pandemic Economic Scenario or get in contact with Preqin at info@preqin.com for the most accurate private capital cash flow forecasting model.

If FRG can help you better understand the effects of macroeconomic shocks on your private capital portfolios, contact us at info@frgrisk.com.


Is a Pandemic Scenario Just a Recession Scenario?

Recently, I wrote about how a pandemic might be a useful scenario to have for scenario analysis. As I thought about how I might design such a scenario I considered: should I assume a global recession for the pandemic scenario?

A pandemic, by definition, is an outbreak of a disease that affects people around the globe. Therefore, it is reasonable to think that it would slow the flow of goods and services through the world. Repercussions would be felt everywhere – from businesses reliant on tourism and travel to companies dependent on products manufactured in countries hit the hardest.

For an initial pass, using a recession seems sensible. However, I believe this “shortcut” omits a key trait needed for scenario development: creativity.

The best scenarios, I find, come from brainstorming sessions. These sessions allow challenges to be made to the status quo and to preconceptions. They also help identify risk and opportunity.

To immediately consider a recession scenario as “the pandemic scenario,” then, might not be advantageous in the long run.

As an exercise, I challenged myself to come up with questions that aren’t immediately answered when assuming a generic recession. Some that I asked were:

  • How do customers use my business? Do they need to be physically present to purchase my goods or use my services?
  • How will my business be impacted if my employees are not able to come into work?
  • What will happen to my business if there is a temporary shortage of a product I need? What will happen if there is a drawn-out shortage?
  • How dependent is my business on goods and services provided by other countries? Do these countries have processes in place to contain or slow down the spread of the disease?
  • Does my business reside in a region of the country that makes it more susceptible to the impact of a pandemic (e.g., ports, major airports, large manufacturing)?
  • How are my products and services used by other countries?
  • How can my company use technology to mitigate the impacts of a pandemic?
  • Is there a difference in the impact to my company if the pandemic is slow moving versus fast moving?

These are just a few of the many questions to consider for this analysis. Ultimately, the choice of whether to use a recession or not rests with the scenario development committee. To make the most informed decision, I would urge the committee to make questions like these a part of the discussion rather than taking the “shortcut” approach.

Jonathan Leonardelli, FRM, Director of Business Analytics for FRG, leads the group responsible for model development, data science, documentation, testing, and training. He has over 15 years’ experience in the area of financial risk.


RELATED:

Do You Have a Pandemic Scenario?

VOR Scenario Builder: Obtaining Business Insight From Scenarios

 
