CECL—Questions to Consider When Selecting Loss Methodologies

Paragraph 326-20-30-3 of the Financial Accounting Standards Board (FASB) standards update[1] states: “The allowance for credit losses may be determined using various methods.” I’m not sure any statement, other than “We need to talk,” can be as fear-inducing. Why is it scary? Because in a world of details and accuracy, this statement is remarkably vague and not prescriptive.

Below are some questions to consider when determining the appropriate loss methodology approaches for a given segment.

How much history do you have?

If a financial institution (FI) has limited history[2] then the options available to it are, well, limited. To build a model, one needs sufficient data to capture the behavior (e.g., performance or payment) of accounts. Without enough data, the probability of successfully building a model is low. Worse yet, even if one builds a model, the likelihood of it being useful and robust is minimal. As a result, loss methodology approaches that do not need a lot of data should be considered (e.g., discounted cash flow or a qualitative factor approach based on industry information).
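
To make that data appetite concrete, here is a minimal sketch of the discounted cash flow idea: the allowance is the amortized cost basis minus the present value of expected cash flows, discounted at the effective interest rate. The loan figures below are hypothetical and chosen purely for illustration.

```python
# Minimal sketch of a discounted cash flow (DCF) view of expected credit loss.
# All figures are hypothetical.

amortized_cost = 100_000
effective_rate = 0.06           # annual effective interest rate

# Credit-adjusted expected annual cash flows over the remaining life of the loan.
expected_cash_flows = [30_000, 30_000, 28_000, 20_000]

present_value = sum(
    cf / (1 + effective_rate) ** t
    for t, cf in enumerate(expected_cash_flows, start=1)
)

allowance = amortized_cost - present_value
print(f"PV of expected cash flows: {present_value:,.0f}")
print(f"Allowance for credit losses: {allowance:,.0f}")
```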

Have relevant business definitions been created?

The loss component approach (decomposing loss into PD, LGD, and EAD) is considered a leading practice at banks[3]. However, in order to use this approach, definitions of default and, arguably, paid-in-full need to be created for each segment being modeled. (Note: these definitions can be the same or different across segments.) Without these definitions, one does not know when an account has defaulted or paid off.
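
For illustration only, the sketch below shows the multiplicative heart of the loss component approach, expected credit loss = PD × LGD × EAD, applied segment by segment. The segment names and parameter values are hypothetical, not a prescribed implementation.

```python
# Loss component sketch: ECL = PD x LGD x EAD, computed per segment.
# Segment names and parameter values are hypothetical.

segments = {
    # segment: (probability of default, loss given default, exposure at default)
    "consumer_auto":  (0.020, 0.45, 25_000_000),
    "small_business": (0.035, 0.55, 40_000_000),
    "cre_owner_occ":  (0.015, 0.40, 60_000_000),
}

total_ecl = 0.0
for name, (pd_, lgd, ead) in segments.items():
    ecl = pd_ * lgd * ead
    total_ecl += ecl
    print(f"{name:15s} ECL = {ecl:>12,.0f}")

print(f"{'portfolio':15s} ECL = {total_ecl:>12,.0f}")
```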

Is there a sufficient number of losses or defaults in the data?

Many of the loss methodologies available for consideration (e.g., loss component or vintage loss rates) require enough losses to discern a pattern. As a result, banks that are blessed with infrequent losses can feel cursed when they try to implement one of these approaches. While low losses do not necessarily rule out these approaches, they do make for a more challenging process.
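
A toy example helps show why sparse losses hurt. A vintage approach tracks cumulative charge-offs as a share of the origination balance by vintage and age; with only a handful of losses per cell, the resulting curves are too noisy to reveal a pattern. Every number below is invented.

```python
# Toy vintage analysis: cumulative loss rate = cumulative charge-offs / origination balance,
# tracked by vintage year and months on book. All numbers are hypothetical.

origination_balance = {"2015": 10_000_000, "2016": 12_000_000, "2017": 11_000_000}

cumulative_chargeoffs = {
    "2015": {12: 20_000, 24: 55_000, 36: 80_000},
    "2016": {12: 15_000, 24: 40_000},
    "2017": {12: 25_000},
}

for vintage, balance in origination_balance.items():
    curve = ", ".join(
        f"{age}m: {chargeoff / balance:.3%}"
        for age, chargeoff in sorted(cumulative_chargeoffs[vintage].items())
    )
    print(f"vintage {vintage}: {curve}")
```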

Are loan level attributes available, accurate, and updated appropriately?

This question tackles the granularity of an approach rather than the approach itself. As mentioned in the post CECL – Data (As Usual) Drives Everything [link], there are three different levels of data granularity a model can be built on. Typically, the decision is between the loan level and the segment level. Loan-level models are great for capturing sensitivities to loan characteristics and macroeconomic events, provided the loan characteristics are accurate and updated (if needed) at regular intervals.

Jonathan Leonardelli, FRM, Director of Business Analytics for the Financial Risk Group, leads the group responsible for model development, data science, documentation, testing, and training. He has over 15 years’ experience in the area of financial risk.

 

[1] FASB accounting standards update can be found here

[2] There is no consistent rule, at least that I’m aware of, that defines “limited history”. That said, we typically look for clean data reaching back through an economic cycle.

[3] See: Capital Planning at Large Bank Holding Companies: Supervisory Expectations and Range of Current Practice, August 2013

RELATED:

CECL—The Caterpillar to Butterfly Evolution of Data for Model Development

CECL – Data (As Usual) Drives Everything

CECL—The Caterpillar to Butterfly Evolution of Data for Model Development

I don’t know about you, but I find caterpillars to be a bit creepy[1]. On the other hand, I find butterflies to be beautiful[2]. Oddly enough, this aligns to my views on the different stages of data in relation to model development.

As a financial institution (FI) prepares for CECL, it is strongly suggested (by me, at least) that it know which stage its data falls into. Knowing the stage provides guidance on how to proceed.

The Ugly

At FRG we use the term dirty data to describe data that is ugly. Dirty data typically has the following characteristics (the list is not comprehensive):

  • Unexplainable missing values: The key word is unexplainable. Missing values can mean something (e.g., a value has not been captured yet) but often they indicate a problem. See this article for more information.
  • Inconsistent values: For example, a character variable that holds values for state might have Missouri, MO, or MO. as values. A numeric variable for interest rate might hold some values as percents (7.5) and others as decimals (0.075).
  • Poor definitional consistency: This occurs when a rule that is used to classify some attribute of an account changes during history. For example, at one point in history a line of credit might be indicated by a nonzero original commitment amount, but at a different point it might be indicated by whether a revolving flag is non-missing.
The Transition

You should not model or perform analysis using dirty data. Therefore, the next step in the process is to transition dirty data into clean data.

Transitioning to clean data, as the name implies, requires scrubbing the information. The main purpose of this step is to address the issues identified in the dirty data. That is, one would want to fix missing values (e.g., through imputation), standardize variable values (e.g., so all states are identified by a two-character code), and correct inconsistent definitions (e.g., so a line indicator is always based on a nonzero original commitment amount).
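
A rough sketch of what that scrubbing can look like in code follows. The column names, mapping rules, and imputation choice are hypothetical; they simply mirror the issues listed above.

```python
import pandas as pd

# Hypothetical loan extract exhibiting the dirty-data issues described above.
loans = pd.DataFrame({
    "state":         ["Missouri", "MO", "MO.", "NC"],
    "interest_rate": [7.5, 0.075, 6.25, 0.0525],   # mix of percents and decimals
    "credit_score":  [720, None, 680, 700],        # unexplained missing value
})

# Standardize state values to two-character codes.
loans["state"] = loans["state"].replace({"Missouri": "MO", "MO.": "MO"}).str.upper()

# Express all interest rates as decimals (values above 1 are assumed to be percents).
loans["interest_rate"] = loans["interest_rate"].where(
    loans["interest_rate"] <= 1, loans["interest_rate"] / 100
)

# Impute missing credit scores with the median and keep a flag noting the imputation.
loans["credit_score_imputed"] = loans["credit_score"].isna()
loans["credit_score"] = loans["credit_score"].fillna(loans["credit_score"].median())

print(loans)
```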

The Beautiful

A final step must be taken before data can be used for modeling. This step takes clean data and converts it to model-ready data.

At FRG we use the term model-ready to describe clean data with the application of relevant business definitions. An example of a relevant business definition would be how an FI defines default[3]. Once the definition has been created the corresponding logic needs to be applied to the clean data in order to create, say, a default indicator variable.
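
For example, if the chosen definition were “90+ days past due or in non-accrual” (one of the possibilities in footnote [3]), applying it to clean data might look like the hypothetical sketch below; the account values are made up.

```python
import pandas as pd

# Clean monthly performance data (hypothetical accounts and values).
perf = pd.DataFrame({
    "account_id":    [101, 102, 103],
    "days_past_due": [30, 95, 0],
    "non_accrual":   [False, False, True],
})

# Apply the business definition of default: 90+ DPD or in non-accrual status.
perf["default_flag"] = (perf["days_past_due"] >= 90) | perf["non_accrual"]

print(perf)
```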

Just like a caterpillar metamorphosing into a butterfly, dirty data needs to morph into model-ready data for an FI to enjoy its true beauty. And only then can an FI move forward on model development.

 

Jonathan Leonardelli, FRM, Director of Business Analytics for the Financial Risk Group, leads the group responsible for model development, data science, documentation, testing, and training. He has over 15 years’ experience in the area of financial risk.

 

[1] Yikes!

[2] Pretty!

[3] E.g., is it 90+ days past due (DPD) or 90+ DPD or in bankruptcy or in non-accrual or …?

 

RELATED:

CECL—Questions to Consider When Selecting Loss Methodologies

CECL – Data (As Usual) Drives Everything

The Importance of Technical Communication

This is the introduction to a new blog series, The Importance of Technical Communication, which will focus on topics such as verbal and written communication, workplace etiquette, and teamwork in the workplace.

Soft skills, as a general term, include interpersonal skills, leadership, dependability, willingness to learn, and effective communication skills that can be used in any career. These are known by sociologists and anthropologists as skills that are generally required to become a functioning member of society. Yet many articles point out a lack of these soft skills among college graduates and cite it as a main reason why many cannot get hired.

Results from a survey by the Workforce Solutions Group at St. Louis Community College describe these deficiencies specifically as applicant shortcomings. The St. Louis regional survey reports that poor work habits, lack of critical thinking and problem solving skills, lack of teamwork or collaboration, and lack of communication or interpersonal skills rank highest among applicant shortcomings in both the technology and finance domains (see Table 1).

Applicant shortcoming                                   Technology   Finance
Poor work habits                                        56%          50%
Lack of critical thinking and problem solving skills    44%          50%
Lack of teamwork or collaboration                       49%          43%
Lack of communication or interpersonal skills           58%          38%

Table 1: Applicant Shortcomings – 2018 State of St. Louis Workforce Report to the Region

In today’s society, with tools at our fingertips, communication is key. In the workplace, interpersonal skills are needed at a rapid, daily pace. Often other workplace issues, such as a lack of collaboration skills, arise from communication issues. Given these alarming statistics, how do we, in the technology and finance domains, encourage the improvement of these skills within our companies and deal with applicants who lack them? This blog series will discuss these questions and provide tips on how to communicate technical information effectively in the workplace.

Samantha Zerger, business analytics consultant with the Financial Risk Group, is skilled in technical writing. Since graduating from the North Carolina State University’s Financial Mathematics Master’s program in 2017 and joining FRG, she has taken on leadership roles in developing project documentation as well as improving internal documentation processes.

 

CECL – Data (As Usual) Drives Everything

To appropriately prepare for CECL a financial institution (FI) must have a hard heart-to-heart with itself about its data. Almost always, simply collecting data in a worksheet, reviewing it for gaps, and then giving it the thumbs up is insufficient.

Data drives all parts of the CECL process. The sections below, by no means exhaustive, highlight key areas where your data, simply by being your data, constrains your options.

Segmentation

Paragraph 326-20-30-2 of the Financial Accounting Standards Board (FASB) standards update[1] states: “An entity shall measure expected credit losses of financial assets on a collective (pool) basis when similar risk characteristic(s) exist.” It then points to paragraph 326-20-55-5 which provides examples of risk characteristics, some of which are: risk rating, financial asset type, and geographical location.

Suggestion: prior to reviewing your data, consider what risk profiles are in your portfolio. After that, review your data to see whether it can adequately capture those risk profiles. As part of that process, consider reviewing the items below (a simple profiling sketch follows the list):

  • Frequency of missing values in important variables
  • Consistency in values of variables
  • Definitional consistency[2]
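
A simple profiling pass, sketched below with an invented loan history, can surface the first two items quickly; definitional consistency usually takes more investigative work.

```python
import pandas as pd

# Hypothetical loan history used to illustrate the review steps above.
history = pd.DataFrame({
    "risk_rating": ["3", "4", None, "3"],
    "asset_type":  ["CRE", "C&I", "CRE", "cre"],          # inconsistent casing
    "geography":   ["NC", "North Carolina", "NC", None],  # inconsistent values
})

# Frequency of missing values in important variables.
print(history.isna().mean().sort_values(ascending=False))

# Consistency in the values of variables.
for column in ["asset_type", "geography"]:
    print(column, history[column].value_counts(dropna=False).to_dict())
```
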
Methodology Selection

The FASB standard update does not provide guidance as to which methodologies to use[3]. That decision is entirely up to the FI[4]. However, the methodologies that are available to the FI are limited by the data it has. For example, if an FI has limited history then any of the methodologies that are rooted in historical behavior (e.g., vintage analysis or loss component) are likely out of the question.

Suggestion: review your historical data and ask yourself these questions: 1) Do I have sufficient data to capture the behavior of a given risk profile? 2) Is my historical data of good quality? 3) Are there gaps in my history?

Granularity of Model

Expected credit loss can be determined at three different levels of granularity: loan, segment (i.e., risk profile), and portfolio. Each granularity level has its pros and cons, but the level an FI can use depends on its data.

Suggestion: review variables that are account specific (e.g., loan-to-value, credit score, number of accounts with the institution) and ask yourself: Are the sources of these variables reliable? Do they get refreshed often enough to capture changes in customer behavior or the macroeconomic environment?

Hopefully, this post has started you critically thinking about your data. While data review might seem daunting, I cannot stress enough—it’s needed, it’s critical, it’s worth the effort.

 

Jonathan Leonardelli, FRM, Director of Business Analytics for the Financial Risk Group, leads the group responsible for model development, data science, documentation, testing, and training. He has over 15 years’ experience in the area of financial risk.

 

[1] You can find the update here

[2] More on what these mean in a future blog post

[3] Paragraph 326-20-30-3

[4] A future blog post will cover some questions to ask to guide in this decision.

 

RELATED:

CECL—The Caterpillar to Butterfly Evolution of Data for Model Development

Avoiding Discrimination in Unstructured Data

An article published by the Wall Street Journal on Jan. 30, 2019  got me thinking about the challenges of using unstructured data in modeling. The article discusses how New York’s Department of Financial Services is allowing life insurers to use social media, as well as other nontraditional sources, to set premium rates. The crux: the data cannot unfairly discriminate.  

I finished the article with three questions on my mind. The first: How does a company convert unstructured data into something useful? The article mentions that insurers are leveraging public information – like motor vehicle records and bankruptcy documents – in addition to social media. Surely, though, this information is not in a structured format to facilitate querying and model builds.

Second: How does a company ensure the data is good quality? Quality here doesn’t only mean the data is clean and useful, it also means the data is complete and unbiased. A lot of effort will be required to take this information and make it model ready. Otherwise, the models will at best provide spurious output and at worst provide biased output.

The third: With all this data available what “new” modeling techniques can be leveraged? I suspect many people read that last sentence and thought AI. That is one option. However, the key is to make sure the model does not unfairly discriminate. Using a powerful machine learning algorithm right from the start might not be the best option. Just ask Amazon about its AI recruiting tool.[1]

The answers to these questions are not simple, and they do require a blend of technological aptitude and machine learning sophistication. Stay tuned for future blog posts as we provide answers to these questions.

 

[1] Amazon scraps secret AI recruiting tool that showed bias against women

 

Jonathan Leonardelli, FRM, Director of Business Analytics for the Financial Risk Group, leads the group responsible for model development, data science, documentation, testing, and training. He has over 15 years’ experience in the area of financial risk.

Does the Liquidity Risk Premium Still Exist in Private Equity?

FRG has recently been investigating the dynamics of the private capital markets.  Our work has led us to a ground-breaking product designed to help allocators evaluate potential cash flows, risks, and plan future commitments to private capital.  You can learn more here and read about our modeling efforts in our white paper, “Macroeconomic Effects On The Modeling of Private Capital Cash Flows.”

As mentioned in a previous post, we are investigating the effects of available liquidity in the private capital market.  This leads to an obvious question: Does the Liquidity Risk Premium Still Exist in Private Equity?

It is assumed by most in the space that the answer is “Yes.”  Excess returns provided by private funds are attributable to reduced liquidity.  Lock up periods of 10+ years allow managers to find investments that would not be possible otherwise.  This premium is HIGHLY attractive in a world of low rates and cyclically high public equity valuations.  Where else can a pension or endowment find the rates of return required?

If the answer is “No,” then Houston, we have a problem.  Money continues to flow into PE at a high rate.  A recent article in the FT (quoting data from FRG partner Preqin) shows there is nearly $1.5 trillion in dry powder.  Factoring in leverage, there could be in excess of $5 trillion in capital waiting to be deployed.  In the case of a “No” answer, return chasing could have gone too far, too fast.

As mentioned, leverage in private capital funds is large and may be growing larger.  If the liquidity risk premium has been bid away, what investors are left with may very well be just leveraged market risk.  What is assumed to be high alpha/low beta might, in fact, be low alpha/high beta.  This has massive implications for asset allocation.

We are attempting to get our heads around this problem in order to help our clients understand the risk associated with their portfolios.

 

Dominic Pazzula is a Director with the Financial Risk Group specializing in asset allocation and risk management.  He has more than 15 years of experience evaluating risk at a portfolio level and managing asset allocation funds.  He is responsible for product design of FRG’s asset allocation software offerings and consults with clients helping to apply the latest technologies to solve their risk, reporting, and allocation challenges.

 


Private Equity and Debt Liquidity, the “Secondary” Market

A significant consideration in several aspects of Private Equity and Private Debt has been the liquidity (or lack thereof) of these investments.  The liquidity factor has been cited as a basic input to investment decisions, influencing complex pricing, return of investment and financial risk management.  But as the environment has changed and matured, is liquidity being considered as it should be?

FRG’s ongoing research suggests that some of the changes this asset class is experiencing may be attributable to changes in the liquidity profile of these investments, which in turn may affect asset management decisions.  As modeling techniques continue to evolve in the asset management space, illustrated in our recent paper Macroeconomic Effects On The Modeling of Private Capital Cash Flows, their use as both an asset management tool and a risk management tool becomes more valuable.

Up to this point, the extreme importance placed on liquidity risk for all types of financial investments, and by the financial community in general, has been primarily associated with public investments.  However, a burgeoning “secondary” market in Private Equity and Private Debt will change the liquidity considerations of this asset class, and a better understanding of that market is necessary for investment managers active in this space.  Achieving this understanding will in turn provide private equity and private debt investment managers with another perspective with which to assess management decisions, aligning a bit more with what is traditionally available for public investments. FRG is refining its research into the liquidity of Private Capital investments through an appreciation of the dynamics of the environment, to provide a better understanding of the behavior of these investments. Watch for more from us on this intriguing subject.

Read more about FRG’s work in Private Capital Forecasting via the VOR platform.

Dr. Jimmie Lenz is a Principal with the Financial Risk Group and teaches Finance at the University of South Carolina.  He has 30 years of experience in financial services, including roles as Chief Risk Officer, Chief Credit Officer, and Head of Predictive Analytics at one of the largest brokerage firms and Wealth Management groups in the U.S.

Change in CECL Approved by the FDIC

The Federal Deposit Insurance Corporation (FDIC) yesterday (12/18/18) approved a measure that will allow a three-year phase-in of the impact of CECL on regulatory capital. This change will also delay the impact on bank stress tests until 2020.  The change does not affect the rule itself but gives banks the option to phase in the impacts of CECL on regulatory capital over a three-year period. The details of this change can be found in the FDIC memorandum released yesterday.  The memorandum also adjusts how reserves for “bad loans” will be accounted for in regulatory capital.

The Financial Risk Group is recommending that banks use this time to better understand the impact, and the opportunities, that result from the mandated changes. “Time to implementation has been a limiting factor for some institutions to explore the identification of additional stakeholder value, but this should no longer be the case,” stated John Bell, FRG’s managing partner. FRG has partnered, and is currently partnering, with clients of all types on a number of assessments and implementations of CECL.  The lessons to date regarding CECL are available in a number of our publications, including: CECL-Considerations, Developments, and Opportunities and Current Expected Credit Loss-Why The Expectations Are Different.

IFRS 17: Killing Two Birds

Time is ticking for the 450 insurers around the world that must comply with International Financial Reporting Standard 17 (IFRS 17) by January 1, 2021, the compliance date for companies whose financial year starts on January 1.

Insurers are at different stages of preparation, ranging from performing gap analyses, to issuing requirements to software and consulting vendors, to starting the pilot phase with a new IFRS 17 system, with a few already embarking on implementing a full IFRS 17 system.

Unlike the banks, the insurance industry has historically spent less on large IT system revamps. This is in part due to the additional volume, frequency and variety of banking transactions compared to insurance transactions.

IFRS 17 is one of the biggest ‘people, process and technology’ revamp exercises for the insurance industry in a long while. The Big 4 firms have published a multitude of papers and videos on the Internet highlighting the impact of the new reporting standard for insurance contracts, which was issued by the IASB in May 2017. In short, it is causing a buzz in the industry.

As efforts are focused on ensuring regulatory compliance to the new standard, insurers must also ask: “What other strategic value can be derived from our heavy investment in time, manpower and money in this whole exercise?”

The answer—analytics to gain deeper business insights.

One key objective of IFRS 17 is to provide information at a level of granularity that helps stakeholders assess the effect of insurance contracts on financial position, financial performance and cash flows, increasing transparency and comparability.

Most IFRS 17 systems in the market today achieve this by bringing the required data into the system, computing, reporting and integrating with the insurer’s GL system. From a technology perspective, such systems will comprise a data management tool, a data model, a computation engine and a reporting tool. However, most of these systems are not built to provide strategic value beyond pure IFRS 17 compliance.

Apart from the IFRS 17 data, an insurer can use this exercise to put in place an enterprise analytics platform that goes beyond IFRS 17 reporting to broader and deeper financial analytics, as well as customer, operational and risk analytics. To leverage new predictive analytics technologies like machine learning and artificial intelligence, a robust enterprise data platform to house and make available large volumes of data (big data) is crucial.

Artificial Intelligence can empower important processes like claims analyses, asset management, risk calculation, and prevention. For instance, it enables better forecasting of claims experience based on a larger variety and volume of real-time data. The same machinery can be used to make informed decisions about investments based on intelligent algorithms, among other use cases.

As the collection of data becomes easier and more cost effective, Artificial Intelligence can drive whole new avenues of growth for the insurance industry.

The key is centralizing most of your data onto a robust enterprise platform to allow cross line of business insights and prediction.

As an insurer, if your firm has not embarked on such a platform, selecting a robust system that can cater to IFRS 17 requirements AND beyond will be a case of killing two birds with one stone.

FRG can help you and your teams get ready for IFRS 17.  Contact us today for more information.

Tan Cheng See is Director of Business Development and Operations for FRG.

Top 6 Things To Consider When Creating a Data Services Checklist

“Data! Data! Data! I can’t make bricks without clay.”
— Sherlock Holmes, in Arthur Conan Doyle’s The Adventure of the Copper Beeches

You should by now have a solid understanding of the growth and history of data, data challenges and how to effectively manage them, what data as a service (DaaS) is, how to optimize data using both internal and external data sources, and the benefits of using DaaS. In our final post of the series, we will discuss the top six things to consider when creating a Data Services strategy.

Let’s break this down into two sections: 1) prerequisites and 2) the checklist.

Prerequisites

We’ve identified five crucial points below to consider prior to starting your data services strategy. These will help frame and pull together the sections of information needed to build a comprehensive strategy that moves your business toward success.

1: View data as a strategic business asset

In the age of data regulation, including the BCBS 239 principles for effective risk data aggregation and risk reporting, GDPR and others, data, especially data relating to an individual, is considered an asset that must be managed and protected. It can also be aggregated, purchased, traded and legally shared to create bespoke user experiences and support more targeted business decisions. Data must be classified and managed with the appropriate level of governance, in the same vein as other assets such as people, processes and technology. Being in this mindset, appreciating the value of data, and recognizing that not all data is alike and must be managed appropriately will ultimately ensure business success in a data-driven world.

2: Ensure executive buy-in, senior sponsorship and support

As with any project, having executive buy-in is required to ensure top down adoption. However, partnering with business line executives who create data and are power users of it can help champion its proper management and reuse in the organization. This assists in achieving goals and ensuring project and business success. The numbers don’t lie: business decisions should be driven by data.

3: Have a defined data strategy and target state that supports the business strategy

Having data for the sake of it won’t provide any value; rather, a clearly defined data strategy and target state that outlines how data will support the business will allow for increased user buy-in and support. This strategy must include and outline:

  • A governance model;
  • An organization chart with ownership, roles and responsibilities, and operations; and
  • Goals for data accessibility and operations (or data maturity goals).

If these sections are not agreed upon from the start, uncertainty, overlapping responsibilities, duplication of data and effort, as well as regulatory or even legal issues, may arise.

4: Have a Reference Data Architecture to Demonstrate where Data Services Fit

Understanding the architecture that supports data and data maturity goals, including the components required to manage data from acquisition through distribution and retirement, is critical. It is also important to understand how these components fit into the overall architecture and infrastructure of the technology at the firm. Defining a clear data architecture and its components, including:

  • Data model(s)
  • Acquisition
  • Access
  • Distribution
  • Storage
  • Taxonomy

is required prior to integrating the data.

5: Data Operating Model – Understanding How the Data Traverses the Organization

It is crucial to understand the data operations and operating model – including who does what to the data and how the data ownership changes over time or transfers among owners. Data lineage is key – where your data came from, its intended use, who has/is allowed to access it and where it goes inside or outside the organization – to keep it clean and optimize its use. Data quality tracking, metrics and remediation will be required.

Existing recognized standards such as the Global Legal Entity Identifier (LEI) that can be acquired and distributed via data services can help in the sharing and reuse of data that is ‘core’ to the firm. They can also assist in tying together data sets used across the firm.

Checklist/Things to Consider

Once you’ve finished the requirements gathering and understand the data landscape, including the roles and responsibilities described above, you’re ready to begin putting together your data services strategy. To build an all-encompassing strategy, the experts suggest including the following.

1: Define the Data Services Required

  •  Classification: core vs. business shared data and ownership
    • Is everyone speaking a common language?
    • What data is ‘core’ to the business, meaning it will need to be commonly defined and used across the organization?
    • What data will be used by a specific business that may not need to be uniformly defined?
    • What business-specific data will be shared across the organization, which may need to be uniformly defined and might need more governance?
  • Internal vs external sourcing
    • Has the business collected or created the data themselves or has it been purchased from a 3rd party? Are definitions, metadata and business rules defined?
    • Has data been gathered or sourced appropriately and with the correct uniform definitions, rules, metadata and classification, such as LEI?
  • Authoritative Data Sources for the Data Services
    • Have you documented where, from whom, when etc. the data was gathered (from Sources of Record or Sources of Origin)? For example, the Source of Origin might be a trading system, an accounting system or a payments system. The general ledger might be the Source of Record for positions.
    • Who is the definitive source (internal/external)? Which system?
  • Data governance requirements
    • Have you adhered to the proper definitions, rules, and standards set in order to handle data?
    • Who should be allowed to access the data?
    • Which (critical, usually externally facing) applications must access the data directly?
  • Data operations and maintenance
    • Have you kept your data clean and up to date?
    • Are you up to speed with regulations, such as GDPR, and have you successfully obtained explicit consent for the information?
    • Following your organization chart and the rules and requirements detailed above, are the data owners known and informed, and do they understand that they are responsible for making sure their data maintains its integrity?
    • Are data quality metrics monitored with a process to correct data issues?
    • Do all users with access to the data know who to speak to if there is a data quality issue and know how to fix it?
  • Data access, distribution and quality control requirements
    • Has the data been classified properly? Is it public information? If not, is it restricted to those who need it?
    • Have you defined how you share data between internal/external parties?
    • Have the appropriate rules and standards been applied to keep it clean?
    • Is there a clearly defined process for this?
  • Data integration requirements
    • If the data will be merged with other data sets/software, have the data quality requirements been met to ensure validity?
    • Have you prioritized the adoption of which applications must access the authoritative data distributed via data services directly?
    • Have you made adoption easy – allowing flexible forms of access to the same data (e.g., via spreadsheets, file transfers, direct APIs, etc.)?

2: Build or Acquire Data Services

To recap, are you building or acquiring your own Data Services? Keep in mind that the following must be met and must adhere to compliance requirements:

  • Data sourcing and classification, assigning ownership
  • Data Access and Integration
  • Proper Data Services Implementation, access to authoritative data
  • Proper data testing, and data remediation, keeping the data clean
  • Appropriate access control and distribution of the data, flexible access
  • Quality control monitoring
  • Data issue resolution process

The use of, and regulations around, data will constantly evolve, as will the number of users data can support in business ventures. We hope this checklist provides a foundation for building and supporting your organization’s data strategies. If there are any areas you’re unclear on, don’t forget to look back through our first five blogs, which provide more in-depth overviews of the use of data services to support the business.

Thank you for tuning into our first blog series on data management. We hope that you found it informative but most importantly useful towards your business goals.

If you enjoyed our blog series or have questions on the topics discussed, write to us on Twitter @FRGRISK.

Dessa Glasser is a Principal with the Financial Risk Group, and an independent board member of Oppenheimer & Company, who assists Virtual Clarity, Ltd. on data solutions as an Associate. 

 

RELATED:

Data Is Big, Did You Know?

Data Management – The Challenges

Data as a Service (DaaS) Solution – Described

Data as a Service (DaaS) Data Sources – Internal or External?

Data as a Service (DaaS) – The Benefits
