Improving Business Email Etiquette

This is the second post in an occasional series about the importance of technical communication in the workplace.

According to a 2015 worldwide study by The Radicati Group, Inc., the average business user sends and receives 122 emails per day, and roughly 112.5 billion business emails circulate worldwide each day. These statistics reflect how heavily businesses rely on email communication on a daily basis. Given that massive influx of email, almost any employee at your workplace could likely list three pet peeves about email communication. The following are the answers I got from a few FRG employees:

  • Emails that have a missing subject line or have no content
  • Emails that do not have a clear response to your question
  • Emails that do not get to the point quickly or are superfluous

How do we ensure that we are not the employees that are sending the above types of emails? How do we ensure that we are taking advantage of this easy communication tool to be efficient, productive, and constructive in the workplace? How do we ensure that we are communicating in a professional manner?

Follow these rules (in no particular order) on email etiquette to make sure you are sending correct and understandable information.

  1. Keep it simple. Use succinct sentences that get to the point promptly.
  2. Be professional. If you are not positive the receiver of the email knows who you are, briefly introduce yourself (e.g., state your name, job title, and purpose of email).
  3. Make it standalone. Assume that the reader has not read the previous emails in the thread. Briefly recap the discussion before continuing.
  4. Read the entire email before sending. Ensure that there are no typos and that the content makes sense.
  5. Make no assumptions. Do not assume that others understand what you are saying. Be clear in your statements/questions.
  6. Be consistent. Include a clear and intuitive subject line and body. Reference terms the same way throughout an email thread to avoid confusion (e.g., Financial Risk Group vs. FRG).
  7. Always consider lists. Use lists to group related items, steps, questions, etc. Use numbered or lettered lists for items that must appear in a specific order and bullets for items that do not.
  8. Use parallel structure. Construct sentences so that readers can understand difficult concepts more quickly.
    • Parallel structure is especially important when writing lists. Begin each statement with the same part of speech. For example, if explaining steps in a process, use verbs such as type, click, or close to begin each statement.
    • Parallel structure can be used in comparisons. Repeat the same phrases in order to be clear. For example, the new user interface is more user-friendly than the old user interface.
    • Parallel structure can help define the format and/or layout. Repeat the same format and/or layout to ensure consistent organization. For example, if you include a bolded header for one topic, use a bolded header for each topic.

The above rules apply to emails sent to any reader, whether a co-worker, boss, client, future employer, etc. Ultimately, it is important to send clear, understandable statements and questions to ensure you receive a productive response, and the one you expect.

Samantha Zerger, business analytics consultant with the Financial Risk Group, is skilled in technical writing. Since graduating from the North Carolina State University’s Financial Mathematics Master’s program in 2017 and joining FRG, she has taken on leadership roles in developing project documentation as well as improving internal documentation processes.

CECL – The Power of Vintage Analysis

I would argue that a critical step in getting ready for CECL is to review the vintage curves of the segments that have been identified. Not only do the resulting graphs provide useful information but the process itself also requires thought on how to prepare the data.

Consider the following graph of auto loan losses for different vintages of Not-A-Real-Bank bank[1]:

[Figure: stylized cumulative vintage loss curves for Not-A-Real-Bank auto loans, one curve per origination quarter]

While this is a highly-stylized depiction of vintage curves, its intent is to illustrate what information can be gleaned from such a graph. Consider the following:

  1. A clear end to the seasoning period can be determined (period 8)
  2. Outlier vintages can be identified (2015Q4)
  3. Visual confirmation that the segmentation captures risk profiles (there is not a substantial number of vintages behaving oddly)

But that’s not all! To get to this graph, some important questions need to be asked about the data. For example (a minimal data-preparation sketch follows this list):

  1. Should prepayment behavior be captured when deriving the loss rates? If so, what’s the definition of prepayment?
  2. At what time period should the accumulation of losses be stopped (e.g., contractual term)?
  3. Is there enough loss[2] behavior to model at the loan level?
  4. How should accounts that renew be treated (e.g., put in new vintage)?
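For readers who like to see the mechanics, below is a minimal Python/pandas sketch of how cumulative vintage loss curves might be assembled from loan-level data. It is only an illustration: the file name and column names (loan_id, orig_quarter, period_on_book, loss_amt, orig_balance) are assumptions, and the prepayment, cutoff, and renewal questions above still have to be settled before the curves are meaningful.

    import pandas as pd

    # One row per loan per period on book (hypothetical layout):
    #   loan_id, orig_quarter (e.g., "2015Q4"), period_on_book (1, 2, ...),
    #   loss_amt (charge-offs recognized in the period), orig_balance
    loans = pd.read_csv("loan_history.csv")

    # Original balance of each vintage, counting each loan once
    vintage_bal = (loans.drop_duplicates("loan_id")
                        .groupby("orig_quarter", as_index=False)["orig_balance"].sum()
                        .rename(columns={"orig_balance": "vintage_orig_balance"}))

    # Losses recognized in each period on book, by vintage
    losses = (loans.groupby(["orig_quarter", "period_on_book"], as_index=False)["loss_amt"]
                   .sum()
                   .merge(vintage_bal, on="orig_quarter"))
    losses["period_loss_rate"] = losses["loss_amt"] / losses["vintage_orig_balance"]

    # Accumulate over periods on book to get one curve per vintage
    losses = losses.sort_values(["orig_quarter", "period_on_book"])
    losses["cum_loss_rate"] = losses.groupby("orig_quarter")["period_loss_rate"].cumsum()

    curves = losses.pivot(index="period_on_book",
                          columns="orig_quarter",
                          values="cum_loss_rate")
    curves.plot(title="Cumulative loss rate by vintage")   # requires matplotlib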

In conclusion, performing vintage analysis is more than just creating a picture with many different colors. It provides insight into the segments, makes one consider the data, and, if the data is appropriately constructed, positions one for subsequent analysis and/or modeling.

Jonathan Leonardelli, FRM, Director of Business Analytics for the Financial Risk Group, leads the group responsible for model development, data science, documentation, testing, and training. He has over 15 years’ experience in the area of financial risk.

 

[1] Originally I called this bank ACME Bank, but when I searched to see if one existed I got this, this, and this…so I changed the name. I then did a search of the new name and promptly fell into a search-engine rabbit hole that, after a while, I climbed out of with the realization that for any one- or two-word combination I come up with, someone else has already done the same and then added “bank” to the end.

[2] You can also build vintage curves on defaults or prepayment.

 

RELATED:

CECL—Questions to Consider When Selecting Loss Methodologies

CECL—The Caterpillar to Butterfly Evolution of Data for Model Development

CECL – Data (As Usual) Drives Everything

CECL—Questions to Consider When Selecting Loss Methodologies

Paragraph 326-20-30-3 of the Financial Accounting Standards Board (FASB) standards update[1] states: “The allowance for credit losses may be determined using various methods.” I’m not sure any statement, other than “We need to talk,” can be as fear-inducing. Why is it scary? Because in the world of details and accuracy, this statement is remarkably vague and not prescriptive.

Below are some questions to consider when determining the appropriate loss methodology approaches for a given segment.

How much history do you have?

If a financial institution (FI) has limited history[2] then the options available to it are, well, limited. To build a model one needs sufficient data to capture the behavior (e.g., performance or payment) of accounts. Without enough data the probability of successfully building a model is low. Worse yet, even if one builds a model, the likelihood of it being useful and robust is minimal. As a result, loss methodology approaches that do not need a lot of data should be considered (e.g., discounted cash flow or a qualitative factor approach based on industry information).
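To illustrate one of the lower-data options, the discounted cash flow idea can be sketched in a few lines: the allowance is the difference between the amortized cost basis and the present value of the cash flows the FI expects to collect, discounted at the effective interest rate. The function and numbers below are purely hypothetical, not a full CECL DCF engine.

    # Illustrative sketch only
    def dcf_allowance(amortized_cost, expected_cashflows, eir):
        """Allowance = amortized cost minus the present value of expected cash
        flows, discounted one period at a time at the effective interest rate."""
        pv = sum(cf / (1 + eir) ** t for t, cf in enumerate(expected_cashflows, start=1))
        return max(amortized_cost - pv, 0.0)

    # Example: $100,000 amortized cost, four annual expected cash flows, 6% EIR
    print(dcf_allowance(100_000, [6_000, 6_000, 6_000, 86_000], 0.06))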

Have relevant business definitions been created?

The loss component approach (decomposing loss into PD, LGD, and EAD) is considered a leading practice at banks[3]. However, in order to use this approach, definitions of default and, arguably, paid-in-full need to be created for each segment being modeled. (Note: these definitions can be the same or different across segments.) Without these definitions, one does not know when an account has defaulted or paid off.
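The arithmetic behind the loss component idea is simple even though the component models are not. As a hedged illustration (ignoring discounting and term structure, which a real implementation would not), expected loss is just the product of the three pieces:

    # Illustrative only: the simplest loss component view of expected credit loss.
    # In practice PD, LGD, and EAD come from models built per segment, using the
    # default and paid-in-full definitions described above.
    def expected_credit_loss(pd_, lgd, ead):
        """ECL = probability of default x loss given default x exposure at default."""
        return pd_ * lgd * ead

    # Example: 2% PD, 45% LGD, $100,000 exposure -> $900 expected loss
    print(expected_credit_loss(0.02, 0.45, 100_000))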

Is there a sufficient number of losses or defaults in the data?

Many of the loss methodologies available for consideration (e.g., loss component or vintage loss rates) require enough losses to discern a pattern. As a result, banks that are blessed with infrequent losses can feel cursed when they try to implement one of those approaches. While low losses do not necessarily rule out these approaches, they do make for a more challenging process.

Are loan level attributes available, accurate, and updated appropriately?

This question tackles the granularity of an approach rather than the approach itself. As mentioned in the post CECL – Data (As Usual) Drives Everything, there are three different levels of data granularity a model can be built on. Typically, the decision is between the loan level and the segment level. Loan-level models are great for capturing sensitivities to loan characteristics and macroeconomic events, provided the loan characteristics are accurate and updated (if needed) at a regular interval.

Jonathan Leonardelli, FRM, Director of Business Analytics for the Financial Risk Group, leads the group responsible for model development, data science, documentation, testing, and training. He has over 15 years’ experience in the area of financial risk.

 

[1] The FASB accounting standards update can be found here.

[2] There is no consistent rule, at least that I’m aware of, that defines “limited history”. That said, we typically look for clean data reaching back through an economic cycle.

[3] See: Capital Planning at Large Bank Holding Companies: Supervisory Expectations and Range of Current Practice, August 2013

RELATED:

CECL—The Caterpillar to Butterfly Evolution of Data for Model Development

CECL – Data (As Usual) Drives Everything

CECL—The Caterpillar to Butterfly Evolution of Data for Model Development

I don’t know about you, but I find caterpillars to be a bit creepy[1]. On the other hand, I find butterflies to be beautiful[2]. Oddly enough, this aligns to my views on the different stages of data in relation to model development.

As a financial institution (FI) prepares for CECL, it is strongly suggested (by me, at least) that it determine which stage its data falls into. Knowing the stage provides guidance on how to proceed.

The Ugly

At FRG we use the term dirty data to describe data that is ugly. Dirty data typically has the following characteristics (the list is not comprehensive):

  • Unexplainable missing values: The key word is unexplainable. Missing values can mean something (e.g., a value has not been captured yet) but often they indicate a problem. See this article for more information.
  • Inconsistent values: For example, a character variable that holds values for state might have Missouri, MO, or MO. as values. A numeric variable for interest rate might hold a value as a percent (7.5) or as a decimal (0.075).
  • Poor definitional consistency: This occurs when a rule that is used to classify some attribute of an account changes during the history. For example, at one point in history a line of credit might be indicated by a nonzero original commitment amount, but at a different point it might be indicated by whether a revolving flag is non-missing.

The Transition

You should not model or perform analysis using dirty data. Therefore, the next step in the process is to transition dirty data into clean data.

Transitioning to clean data, as the name implies, requires scrubbing the information. The main purpose of this step is to address the issues identified in the dirty data. That is, one would want to fix missing values (e.g., imputation), standardize variable values (e.g., all states are identified by a two-character code), and correct inconsistent definitions (e.g., a line indicator is always based on a nonzero original commitment amount).
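As a concrete (and hypothetical) example of that scrubbing step, the pandas sketch below standardizes state codes, puts interest rates on one scale, applies a single line-of-credit definition, and imputes a missing value. The file and column names are assumptions; real cleansing rules come from the FI’s own data dictionary.

    import numpy as np
    import pandas as pd

    raw = pd.read_csv("dirty_extract.csv")   # hypothetical dirty-data extract

    # Standardize inconsistent values: "Missouri", "MO", and "MO." all become "MO"
    state_map = {"MISSOURI": "MO", "MO": "MO", "MO.": "MO"}   # extend as needed
    raw["state"] = raw["state"].str.strip().str.upper().map(state_map)

    # Put interest rates on one scale: treat values greater than 1 as percents
    raw["int_rate"] = np.where(raw["int_rate"] > 1, raw["int_rate"] / 100, raw["int_rate"])

    # Apply one definition consistently across history: a line of credit is
    # indicated by a nonzero original commitment amount
    raw["is_line_of_credit"] = raw["orig_commitment"].fillna(0) > 0

    # Address unexplainable missing values (here, a simple median imputation)
    raw["credit_score"] = raw["credit_score"].fillna(raw["credit_score"].median())

    raw.to_csv("clean_data.csv", index=False)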

The Beautiful

A final step must be taken before data can be used for modeling. This step takes clean data and converts it to model-ready data.

At FRG we use the term model-ready to describe clean data with the application of relevant business definitions. An example of a relevant business definition would be how an FI defines default[3]. Once the definition has been created, the corresponding logic needs to be applied to the clean data in order to create, say, a default indicator variable.
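For example, if the FI adopted a default definition along the lines of footnote [3] (90+ days past due, bankruptcy, or non-accrual), applying it to the clean data might look like the hypothetical sketch below; the column names are assumptions.

    import pandas as pd

    clean = pd.read_csv("clean_data.csv")   # hypothetical output of the scrubbing step

    # Model-ready step: apply the business definition of default to create a flag
    # default = 90+ days past due OR in bankruptcy OR on non-accrual status
    clean["default_flag"] = ((clean["days_past_due"] >= 90)
                             | (clean["bankruptcy_flag"] == 1)
                             | (clean["nonaccrual_flag"] == 1)).astype(int)

    clean.to_csv("model_ready_data.csv", index=False)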

Just like a caterpillar metamorphosing to a butterfly, dirty data needs to morph to model-ready for an FI to enjoy its true beauty. And, only then, can an FI move forward on model development.

 

Jonathan Leonardelli, FRM, Director of Business Analytics for the Financial Risk Group, leads the group responsible for model development, data science, documentation, testing, and training. He has over 15 years’ experience in the area of financial risk.

 

[1] Yikes!

[2] Pretty!

[3] E.g., is it 90+ days past due (DPD) or 90+ DPD or in bankruptcy or in non-accrual or …?

 

RELATED:

CECL—Questions to Consider When Selecting Loss Methodologies

CECL – Data (As Usual) Drives Everything

The Importance of Technical Communication

This is the introduction to a new blog series, The Importance of Technical Communication, which will focus on topics such as verbal and written communication, workplace etiquette, and teamwork in the workplace.

Soft skills, as a general term, include interpersonal skills, leadership, dependability, willingness to learn, and effective communication skills that can be used in any career. Sociologists and anthropologists regard them as the skills generally required to become a functioning member of society. Yet many articles point out a lack of these soft skills among college graduates and cite it as a main reason why many cannot get hired.

Results from a survey by the Workforce Solutions Group at St. Louis Community College frame these deficiencies as applicant shortcomings. In the St. Louis regional survey, poor work habits, lack of critical thinking and problem-solving skills, lack of teamwork or collaboration, and lack of communication or interpersonal skills rank highest among applicant shortcomings in both the technology and finance domains (see Table 1).

  Applicant shortcoming                                    Technology   Finance
  Poor work habits                                             56%        50%
  Lack of critical thinking and problem solving skills        44%        50%
  Lack of teamwork or collaboration                            49%        43%
  Lack of communication or interpersonal skills                58%        38%

Table 1: Applicant Shortcomings – 2018 State of St. Louis Workforce Report to the Region

In today’s society, with tools at our fingertips, communication is key. In the workplace, interpersonal skills are needed at a rapid, daily pace, and other workplace issues, such as a lack of collaboration skills, often arise from communication issues. Given these alarming statistics, how do we, in the technology and finance domains, encourage the improvement of these skills within our companies and deal with applicants who lack them? This blog series will discuss these questions and provide tips on communicating technical information clearly and professionally in the workplace.

Samantha Zerger, business analytics consultant with the Financial Risk Group, is skilled in technical writing. Since graduating from the North Carolina State University’s Financial Mathematics Master’s program in 2017 and joining FRG, she has taken on leadership roles in developing project documentation as well as improving internal documentation processes.

 

CECL – Data (As Usual) Drives Everything

To appropriately prepare for CECL, a financial institution (FI) must have a hard heart-to-heart with itself about its data. Almost always, simply collecting data in a worksheet, reviewing it for gaps, and then giving it the thumbs up is insufficient.

Data drives all parts of the CECL process. The sections below, by no means exhaustive, highlight key areas where your data, simply by being your data, constrains your options.

Segmentation

Paragraph 326-20-30-2 of the Financial Accounting Standards Board (FASB) standards update[1] states: “An entity shall measure expected credit losses of financial assets on a collective (pool) basis when similar risk characteristic(s) exist.” It then points to paragraph 326-20-55-5 which provides examples of risk characteristics, some of which are: risk rating, financial asset type, and geographical location.

Suggestion: prior to reviewing your data, consider what risk profiles are in your portfolio. After that, review your data to see if it can adequately capture those risk profiles. As part of that process, consider reviewing the following (a short profiling sketch follows this list):

  • Frequency of missing values in important variables
  • Consistency in values of variables
  • Definitional consistency[2]
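A minimal pandas profiling sketch for the first two checks might look like the following. The file and column names are assumptions; definitional consistency usually requires comparing classification logic across time and is harder to automate.

    import pandas as pd

    data = pd.read_csv("portfolio_history.csv")   # hypothetical extract

    # Frequency of missing values in variables that drive segmentation
    key_vars = ["risk_rating", "asset_type", "state", "orig_balance"]   # assumed names
    print(data[key_vars].isna().mean().sort_values(ascending=False))

    # Consistency in values: small sets of distinct values are easy to eyeball
    for col in ["risk_rating", "asset_type", "state"]:
        print(col, sorted(data[col].dropna().unique()))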

Methodology Selection

The FASB standards update does not provide guidance as to which methodologies to use[3]. That decision is entirely up to the FI[4]. However, the methodologies that are available to the FI are limited by the data it has. For example, if an FI has limited history, then any of the methodologies that are rooted in historical behavior (e.g., vintage analysis or loss component) are likely out of the question.

Suggestion: review the historical data and ask yourself these questions: 1) Do I have sufficient data to capture the behavior for a given risk profile? 2) Is my historical data of good quality? 3) Are there gaps in my history?
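A first pass at those three questions can come straight from the data. The sketch below (hypothetical file and column names) measures how far back each risk profile’s history reaches and whether any reporting months are missing.

    import pandas as pd

    hist = pd.read_csv("portfolio_history.csv")   # hypothetical extract
    hist["report_month"] = pd.to_datetime(hist["report_date"]).dt.to_period("M")

    for segment, seg_df in hist.groupby("risk_profile"):
        months = seg_df["report_month"].drop_duplicates().sort_values()
        expected = pd.period_range(months.min(), months.max(), freq="M")
        gaps = expected.difference(months)
        print(f"{segment}: {months.min()} to {months.max()}, "
              f"{len(months)} months observed, {len(gaps)} months missing")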

Granularity of Model

Expected credit loss can be determined on three different levels of granularity: loan, segment (i.e., risk profile), and portfolio. Each granularity level has a set of pros and cons but which level an FI can use depends on the data.

Suggestion: review variables that are account specific (e.g., loan-to-value, credit score, number of accounts with the institution) and ask yourself: Are the sources of these variables reliable? Do they get refreshed often enough to capture changes in customer behavior or the macroeconomic environment?

Hopefully, this post has started you critically thinking about your data. While data review might seem daunting, I cannot stress enough—it’s needed, it’s critical, it’s worth the effort.

 

Jonathan Leonardelli, FRM, Director of Business Analytics for the Financial Risk Group, leads the group responsible for model development, data science, documentation, testing, and training. He has over 15 years’ experience in the area of financial risk.

 

[1] You can find the update here

[2] More on what these mean in a future blog post

[3] Paragraph 326-20-30-3

[4] A future blog post will cover some questions to ask to guide in this decision.

 

RELATED:

CECL—The Caterpillar to Butterfly Evolution of Data for Model Development

Avoiding Discrimination in Unstructured Data

An article published by the Wall Street Journal on Jan. 30, 2019  got me thinking about the challenges of using unstructured data in modeling. The article discusses how New York’s Department of Financial Services is allowing life insurers to use social media, as well as other nontraditional sources, to set premium rates. The crux: the data cannot unfairly discriminate.  

I finished the article with three questions on my mind. The first: How does a company convert unstructured data into something useful? The article mentions that insurers are leveraging public information – like motor vehicle records and bankruptcy documents – in addition to social media. Surely, though, this information is not in a structured format that facilitates querying and model builds.

Second: How does a company ensure the data is good quality? Quality here doesn’t only mean the data is clean and useful; it also means the data is complete and unbiased. A lot of effort will be required to take this information and make it model-ready. Otherwise, the models will at best provide spurious output and at worst provide biased output.

The third: With all this data available, what “new” modeling techniques can be leveraged? I suspect many people read that last sentence and thought AI. That is one option. However, the key is to make sure the model does not unfairly discriminate. Using a powerful machine learning algorithm right from the start might not be the best option. Just ask Amazon about its AI recruiting tool.[1]

The answers to these questions are not simple, and they do require a blend of technological aptitude and machine learning sophistication. Stay tuned for future blog posts as we provide answers to these questions.

 

[1] Amazon scraps secret AI recruiting tool that showed bias against women

 

Jonathan Leonardelli, FRM, Director of Business Analytics for the Financial Risk Group, leads the group responsible for model development, data science, documentation, testing, and training. He has over 15 years’ experience in the area of financial risk.

Does the Liquidity Risk Premium Still Exist in Private Equity?

FRG has recently been investigating the dynamics of the private capital markets. Our work has led us to a ground-breaking product designed to help allocators evaluate potential cash flows and risks and plan future commitments to private capital. You can learn more here and read about our modeling efforts in our white paper, “Macroeconomic Effects On The Modeling of Private Capital Cash Flows.”

As mentioned in a previous post, we are investigating the effects of available liquidity in the private capital market.  This leads to an obvious question: Does the Liquidity Risk Premium Still Exist in Private Equity?

Most in the space assume the answer is “Yes”: excess returns provided by private funds are attributable to reduced liquidity. Lock-up periods of 10+ years allow managers to find investments that would not be possible otherwise. This premium is HIGHLY attractive in a world of low rates and cyclically high public equity valuations. Where else can a pension or endowment find the rates of return it requires?

If the answer is “No,” then Houston, we have a problem. Money continues to flow into PE at a high rate. A recent article in the FT (quoting data from FRG partner Preqin) shows there is nearly $1.5 trillion in dry powder. Factoring in leverage, there could be in excess of $5 trillion in capital waiting to be deployed. In the case of a “No” answer, return chasing could have gone too far, too fast.

As mentioned, leverage in private capital funds is large and may be growing larger. If the liquidity risk premium has been bid away, what investors are left with may very well be just leveraged market risk. What is assumed to be high alpha/low beta might, in fact, be low alpha/high beta. This has massive implications for asset allocation.

We are attempting to get our heads around this problem in order to help our clients understand the risk associated with their portfolios.

 

Dominic Pazzula is a Director with the Financial Risk Group specializing in asset allocation and risk management.  He has more than 15 years of experience evaluating risk at a portfolio level and managing asset allocation funds.  He is responsible for product design of FRG’s asset allocation software offerings and consults with clients helping to apply the latest technologies to solve their risk, reporting, and allocation challenges.


Private Equity and Debt Liquidity, the “Secondary” Market

The liquidity (or lack thereof) of Private Equity and Private Debt investments has long been a significant consideration in several aspects of this asset class. The liquidity factor has been cited as a basic consideration in investment decisions, influencing pricing, return of investment, and financial risk management. But as the environment has changed and matured, is liquidity being considered as it should be?

FRG’s ongoing research suggests that some of the changes this asset class is experiencing may be attributable to changes in the liquidity profile of these investments, which in turn may affect asset management decisions. As modeling techniques continue to evolve in the asset management space, as illustrated in our recent paper Macroeconomic Effects On The Modeling of Private Capital Cash Flows, their value as both an asset management tool and a risk management tool grows.

To this point, the extreme importance placed on liquidity risk for all types of financial investments, and by the financial community in general, has been associated primarily with public investments. However, a burgeoning “secondary” market in Private Equity and Private Debt will change the liquidity considerations of this asset class, and a better understanding of that market is necessary for investment managers active in this space. Achieving this understanding will in turn give private equity and private debt investment managers another perspective with which to assess management decisions, one that aligns a bit more with the perspective traditionally available for public investments.

FRG is refining its research into the liquidity of Private Capital investments through an appreciation of the dynamics of the environment, to provide a better understanding of the behavior of these investments. Watch for more from us on this intriguing subject.

Read more about FRG’s work in Private Capital Forecasting via the VOR platform.

Dr. Jimmie Lenz is a Principal with the Financial Risk Group and teaches Finance at the University of South Carolina.  He has 30 years of experience in financial services, including roles as Chief Risk Officer, Chief Credit Officer, and Head of Predictive Analytics at one of the largest brokerage firms and Wealth Management groups in the U.S.

Change in CECL Approved by the FDIC

The Federal Deposit Insurance Corporation (FDIC) yesterday (12/18/18) approved a measure that will allow a three-year phase-in of the impact of CECL on regulatory capital. The change will also delay the impact on bank stress tests until 2020. It does not affect the rule itself, but it gives banks the option to phase in the impacts of CECL on regulatory capital over a three-year period. The details can be found in the FDIC memorandum released yesterday, which also adjusts how reserves for “bad loans” will be accounted for in regulatory capital.

The Financial Risk Group recommends that banks use this time to better understand the impact, and the opportunities, that result from the mandated changes. “Time to implementation has been a limiting factor for some institutions to explore the identification of additional stakeholder value, but this should no longer be the case,” stated John Bell, FRG’s managing partner. FRG has partnered, and continues to partner, with clients of all types on a number of CECL assessments and implementations. The lessons learned to date are available in a number of our publications, including CECL-Considerations, Developments, and Opportunities and Current Expected Credit Loss-Why The Expectations Are Different.

Subscribe to our blog!