AI in FIs: Learning Types and Functions of Machine Learning Algorithms

Through the lens of Financial Risk, this blog series will focus on Financial Institutions as a premier business use case for Artificial Intelligence and Machine Learning.

This blog series has covered how a financial institution (FI) can use machine learning (ML) and how these algorithms can augment existing methods for mitigating financial and non-financial risk. To tie it all together, the focus now will be on different learning types of ML algorithms:

  1. Supervised Learning
  2. Unsupervised Learning
  3. Semi-Supervised Learning

Deciding which learning type and ultimately which algorithm to use depends on two key factors: the data and the business use. Regarding data, it exists in two “formats.” The first type is structured data, which is organized and often takes the form of tables (columns and rows). The second type is unstructured data, which may have a structure of its own but is not in a standardized format; examples include PDFs, recorded voice, and video feeds. This data can provide great value but must be reformatted so an algorithm can consume it.
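As a toy illustration of that reformatting step, the hypothetical snippet below (standard-library Python only; the sample strings are invented) turns a few unstructured text records into a structured table of word counts that an algorithm could consume:

```python
from collections import Counter

# Three hypothetical unstructured text records (e.g., extracted from PDFs).
documents = [
    "customer disputes a wire transfer",
    "routine monthly mortgage payment received",
    "customer flags multiple card charges",
]

# Build a fixed vocabulary: these become the columns of the structured table.
vocabulary = sorted({word for doc in documents for word in doc.split()})

# Each document becomes one row of word counts, one column per term.
rows = [[Counter(doc.split())[word] for word in vocabulary] for doc in documents]

for row in rows:
    print(row)
```

A real pipeline would use purpose-built tools (text vectorizers, speech-to-text, OCR), but the shape of the output is the same: rows of numeric features an algorithm can work with.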

 

Learning Types and Functions of ML Algorithms

 

Supervised learning algorithms draw inferences from input datasets that have a well-defined dependent, or target, variable. This is referred to as labeled data. Consider a scenario in which an FI wants to predict loss due to fraud. For this it would need a labeled dataset containing historical transactions with a target variable that flags known fraudulent transactions. The FI might then use a decision tree, which iteratively separates the data into branches, to estimate the likelihood of fraud. Once the decision tree captures the relationships in the data, it can be deployed to estimate the potential for future fraud cases.
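A sketch of that workflow, assuming scikit-learn is available; the features and the fraud rule below are invented for illustration, not taken from any FI’s data:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Labeled historical transactions: two features plus a known fraud indicator.
amount = rng.exponential(scale=200, size=1000)   # transaction amount
hour = rng.integers(0, 24, size=1000)            # hour of day
X = np.column_stack([amount, hour])

# Synthetic target variable: large late-night transactions are fraudulent.
y = ((amount > 500) & ((hour < 6) | (hour > 22))).astype(int)

# The decision tree iteratively separates the data into branches.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X, y)

# Once fitted, the tree estimates the likelihood of fraud for new cases.
new_transactions = np.array([[900.0, 2], [45.0, 14]])
fraud_probability = tree.predict_proba(new_transactions)[:, 1]
print(fraud_probability)
```

Here the first new transaction (large, at 2 a.m.) should score far higher than the second, mirroring how a deployed model would rank incoming cases.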

Unsupervised learning algorithms draw inferences from input datasets with an undefined dependent variable. This is referred to as unlabeled data. These algorithms are typically used for pre-work that readies data for another process, ranging from data discovery to data preparation and, at times, dimensionality reduction, categorization, and segmentation. Returning to our fraud example, consider the dataset without the target variable (i.e., no fraud indicator). In this scenario, the FI could use an unsupervised learning algorithm to identify the most suspicious transactions by means of clustering.
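A comparable unsupervised sketch, again assuming scikit-learn and synthetic data: with no fraud indicator available, cluster the transactions and flag the ones farthest from any cluster center as the most suspicious. The 1% cutoff is an illustrative assumption:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Unlabeled transactions: same features as before, but no fraud indicator.
amount = rng.exponential(scale=200, size=1000)
hour = rng.integers(0, 24, size=1000)
X = np.column_stack([amount, hour])

# Cluster the transactions into groups of similar behavior.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=1)
labels = kmeans.fit_predict(X)

# Distance of each transaction from its assigned cluster center.
distances = np.linalg.norm(X - kmeans.cluster_centers_[labels], axis=1)

# Flag the 1% of transactions farthest from any center as most suspicious.
threshold = np.quantile(distances, 0.99)
suspicious = np.where(distances > threshold)[0]
print(len(suspicious), "transactions flagged for review")
```

The flagged indices would then go to analysts for review rather than being labeled fraud outright, since the algorithm only identifies unusual behavior.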

Sometimes, a dataset will have both labeled and unlabeled observations, meaning a value for the target variable is known for a portion of the data. Data in this case can be used for semi-supervised learning, which is an iterative process that utilizes both supervised and unsupervised learning algorithms to complete a job. In our fraud example, a neural net may be used to predict likelihood of fraud based on the labeled data (supervised learning). The process can then use this model, along with a clustering algorithm (unsupervised learning), to assign a value to the fraud indicator for the most suspicious transactions in the unlabeled data.
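A simplified sketch of that iterative loop, assuming scikit-learn and synthetic data. For brevity a logistic regression stands in for the neural net, and a confidence threshold stands in for the clustering step used to decide which unlabeled observations receive a fraud-indicator value:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Synthetic transactions; the true fraud rule is hidden from the model.
X = rng.normal(size=(1000, 2))
true_y = (X[:, 0] + X[:, 1] > 1).astype(int)

# Only ~10% of observations carry the fraud indicator (labeled data).
labeled = rng.random(1000) < 0.10
y = np.where(labeled, true_y, -1)   # -1 marks an unlabeled observation

# Supervised step: fit on the labeled portion only.
model = LogisticRegression()
model.fit(X[labeled], y[labeled])

# Score the unlabeled portion and pseudo-label only confident predictions.
probs = model.predict_proba(X[~labeled])[:, 1]
confident = (probs > 0.95) | (probs < 0.05)
pseudo_labels = (probs > 0.5).astype(int)

print(confident.sum(), "of", (~labeled).sum(), "observations pseudo-labeled")
```

In practice the newly labeled observations would be fed back into the supervised step, and the cycle would repeat until no confident assignments remain.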

To learn more about ML algorithms and their applications for risk mitigation, please contact us or visit our Resources page for other ML and AI material, including the New Machinist Journal Vol. 1–5.

 Hannah Wiser is an associate consultant with FRG. After graduating with her Master’s in Quantitative Economics and Econometrics from East Carolina University in 2019, she joined FRG and has worked on projects focusing on technical communication and data governance.

 


AI in FIs: Introducing Machine Learning Algorithms for Risk


For any application of machine learning (ML) being considered for industry practice, the most important thing to remember is that business needs must drive the selection and design of the algorithm used for computation. A financial institution (FI) must be smart about which of these advanced tools it deploys to generate optimal value for the business. For many FIs, this “optimal value” falls into one of two categories: increasing profitability or mitigating risk. In this post, we will focus on the use cases for ML specifically related to risk.

Risk can be broken out between financial risk and nonfinancial risk. Financial risk involves uncertainty in investment or business that can result in monetary loss. For example, when a homeowner defaults on a loan, the lender will lose some or all of those funds.

Nonfinancial risk, on the other hand, is loss an FI experiences from consequences not rooted in financial initiatives. Certain events, such as negative news stories, may not be directly related to the financial side of the business but could deter potential customers and hirable talent. Some areas of risk may be considered either financial or nonfinancial risk, depending on the context.

When properly employed, ML enhances the capabilities of FIs to assess both their financial and nonfinancial risk in two ways. First, it enables skilled workers to do what they do best because they can off-load grunt work, such as cleaning data, to the machine. By deploying a tool to support existing (and cumbersome) business operations, the analyst has more time to focus on their specialty. Second, a machine has the technical capability to reveal nuance in the data that even a specialist could not uncover alone. This supplements the analyst’s understanding of the data and enriches the data’s worth to the business.

The image below elaborates on the many kinds of risk managed by an FI, in addition to practical ways ML can supplement existing methods for risk mitigation.

More complex algorithms may do a better job of fitting the data, model at a higher capacity, or utilize non-traditional types of data (e.g., images, voice, PDFs), but this all comes at a cost. The intricacies of implementing an ML algorithm, the commitment of time required to build a model (i.e., tuning hyperparameters can take days), and the management of unintended bias and overfitting render ML a considerable investment of resources. Moreover, the robust demands for computational power may require an FI to do some pre-work if a stable and capable infrastructure is not already in place.

As innovative as ML can be, any process will only be successful in industry if it produces value beyond its costs. Thanks to advances in computational power and available data, new approaches (e.g., neural nets) have broadened the universe of ML and its relevance, as well as better enabled traditional methods of ML (e.g., time series models). We will expand more on specific algorithms and risk mitigation use cases in later discussions.

Interested in reading more? Subscribe to the FRG blog to keep up with AI in FIs.


 

 

AI in FIs: Foundations of Machine Learning in Financial Risk


Today, opportunities exist for professionals to delegate time-intensive, dense, and complex tasks to machines. Machine Learning (ML), a form of Artificial Intelligence (AI), can automate such tasks and is becoming much more robust as technological advances lessen resource constraints.

Financial institutions (FIs) are constantly under pressure to keep up with evolving technology and regulatory requirements. Compared to what was used in the past, modern tools have become more user-friendly and flexible; they are also easily integrated with existing systems. This evolution is enabling advanced tools such as ML to regain relevance across industries, including finance.

So, how does ML work? Imagine someone learning to throw a football. Over time, the to-be quarterback is trained to adjust the speed of the ball, the strength of the throw, and the trajectory to meet the expected routes of the receivers. In a similar way, machines are trained to perform a specific task, such as clustering, by means of an algorithm. Just as the quarterback is trained by a coach, a machine learns to perform a specific task from an ML algorithm. This expands the possibilities for ways technology can be used to add value to the business.

What does this mean for FIs? The benefit of ML is that value can be added in areas where efficiency, prediction, and accuracy are most critical.  To accomplish this, the company aligns these four components: data, applications, infrastructure, and business needs.

The level of data maturity of FIs determines their capacity for effectively utilizing both structured and unstructured data. A well-established data governance framework lays the foundation for proper use of data for a company. Once their structured data is effectively governed, sourced, analyzed, and managed, they can then employ more advanced tools such as ML to supplement their internal operations. Unstructured data can also be used, but the company must first harness the tools and computing power capable of handling it.

Many companies are turning to cloud computing for their business-as-usual processes and for deploying ML. Cloud computing can be hosted either on-premises or with public cloud services; the choice between them is largely a matter of preference. Either method provides scalable computing power, which is essential when using ML algorithms to unlock the potential value that massive amounts of data provide.



 

 

Avoiding Bureaucratic Phrasing

Employees developed plans during the course of the project in order to create a standardized process with respect to regulation guidelines.

Did you understand that sentence the first time you read it? It is filled with bureaucratic phrasing, which makes the information more complex than necessary.

In the workplace, “bureaucratic” means involving complicated rules and processes that make something unnecessarily slow and difficult. People tend to use this style of phrasing because they believe there is permanence in writing. Say something and it’s gone, but write it down and it’s with us forever.

When people believe their writing is out there for all to see, they want to sound as professional and as knowledgeable as possible. But adding bureaucratic language isn’t the best way to sound like an expert. Many complex phrases read better when they are stripped down into simple words. For example, in the original sentence above, “in order to” can be reduced to “to” and “during the course of” can be simplified to “during”:

Employees developed plans during the project to create a standardized process with respect to regulation guidelines.

Using bureaucratic phrasing can make readers feel inadequate and indirectly exclude them from the conversation. This is why using plain, straightforward language in your writing is recommended instead.

The key is learning how to turn those overly complex phrases into simple words that mean the same thing. Here are some examples:

Bureaucratic Phrase | Simple Word / Phrase
Along the lines of | Like
As of this date | Yet, still, or now
At all times | Always
Due to the fact that | Because
Concerning the matter of | About
For the purpose of | For, to
In spite of the fact that | Although

One guideline is to avoid words and phrases that you would not use in everyday speech. You would never say, “May I look over your paper in the near future in order to review it?” Why write it?

The goal of any documentation, whether it be a technical design document or an email, is to state your main point in a simple manner. Your readers should be able to easily find important information, quickly understand what information they should store for later use, and immediately recognize what is being asked of them. Avoiding bureaucratic phrasing can help you accomplish this.

Resources:

  • Hopkins, Graham. “Why bureaucratic jargon is just a pompous waste of words.” The Guardian, 12 Sept. 2000.
  • Johnson-Sheehan, Richard. Technical Communication Today: Special Edition for Society for Technical Communication Foundation Certification. Fifth Edition.

Samantha Zerger, business analytics consultant with FRG, is skilled in technical writing. Since graduating from the North Carolina State University’s Financial Mathematics Master’s program in 2017 and joining FRG, she has taken on leadership roles in developing project documentation as well as improving internal documentation processes.

RELATED:

Improving Business Email

 

Is a Pandemic Scenario Just a Recession Scenario?

Recently, I wrote about how a pandemic might be a useful scenario to have for scenario analysis. As I thought about how I might design such a scenario I considered: should I assume a global recession for the pandemic scenario?

A pandemic, by definition, is an outbreak of a disease that affects people around the globe. Therefore, it is reasonable to think that it would slow the flow of goods and services through the world. Repercussions would be felt everywhere – from businesses reliant on tourism and travel to companies dependent on products manufactured in countries hit the hardest.

For an initial pass, using a recession seems sensible. However, I believe this “shortcut” omits a key trait needed for scenario development: creativity.

The best scenarios, I find, come from brainstorming sessions. These sessions allow challenges to be made to the status quo and to preconceptions. They also help identify risk and opportunity.

To immediately consider a recession scenario as “the pandemic scenario,” then, might not be advantageous in the long run.

As an exercise, I challenged myself to come up with questions that aren’t immediately answered when assuming a generic recession. Some that I asked were:

  • How do customers use my business? Do they need to be physically present to purchase my goods or use my services?
  • How will my business be impacted if my employees are not able to come into work?
  • What will happen to my business if there is a temporary shortage of a product I need? What will happen if there is a drawn-out shortage?
  • How dependent is my business on goods and services provided by other countries? Do these countries have processes in place to contain or slow down the spread of the disease?
  • Does my business reside in a region of the country that makes it more susceptible to the impact of a pandemic (e.g., ports, major airports, large manufacturing)?
  • How are my products and services used by other countries?
  • How can my company use technology to mitigate the impacts of a pandemic?
  • Is there a difference in the impact to my company if the pandemic is slow moving versus fast moving?

These are just a few of the many questions to consider for this analysis. Ultimately, the choice of whether to use a recession or not rests with the scenario development committee. To make the most informed decision, I would urge the committee to make questions like these a part of the discussion rather than taking the “shortcut” approach.

Jonathan Leonardelli, FRM, Director of Business Analytics for FRG, leads the group responsible for model development, data science, documentation, testing, and training. He has over 15 years’ experience in the area of financial risk.


RELATED:

Do You Have a Pandemic Scenario?

VOR Scenario Builder: Obtaining Business Insight From Scenarios

 

Do You Have a Pandemic Scenario?

A recent white paper I wrote discussed the benefits of scenario analysis. The purpose of scenario analysis is to see how economic, environmental, political, and technological change can impact a company’s business. The recent outbreak of COVID-19 (“Coronavirus”) is a perfect example of how an environmental event can have an impact on the local and, as we are finding out, global economy.

As the world watches this virus spread, I suspect there are some companies who are thankful they created a pandemic scenario. Right now, they are probably preparing to take steps to enact the procedures they created after running the scenario. I also suspect there are other companies who might be in a bit of panic as they wonder how much this will impact them. To those companies I suggest they start considering the impacts now. While we hope this will not reach full pandemic level, the future is unknown.

 


CECL Preparation: Handling Missing Data for CECL Requirements

Most financial institutions (FIs) find that data is the biggest hurdle when it comes to regulatory requirements: they don’t have enough information, they have the wrong information, or they simply have missing information. With the CECL accounting standard, the range of data required to estimate expected credit losses (e.g., reasonable and supportable forecasts) grew beyond what was previously needed. While this is a good thing in the long run (as the requirements gradually help FIs build up their inventory of clean, model-ready data), many FIs are finding it difficult to address data problems right now. In particular, how to handle missing data is a big concern.

Missing data becomes a larger issue because not all missing data is the same. Classifications, based on the root causes of the missing data, are used as guidance in choosing the appropriate method for data replacement. The classifications consist of:

  1. Not missing at random (MNAR) – the cause of the missing data is related to the missing values
    • For example, CLTV values are missing when previous values have exceeded 100.
  2. Missing at random (MAR) – the cause of the missing data is related to observed values of other variables
    • For example, DTI values are missing when the number of borrowers is 2 or more.
  3. Missing completely at random (MCAR) – the cause of the missing data is unrelated to values of the variable or other variables; data is missing due to an entirely random process
    • For example, LTV values are missing because a system outage caused recently loaded data to be reset to default value of missing.

Once a classification is made for the cause of the missing data, it is easier to determine a resolution. For example, if the data is MCAR there is no pattern, and therefore no information is lost if the observations with missing values are dropped. Unfortunately, data is rarely MCAR.

The following methods (not meant to be all-inclusive) are some that an FI may use to handle other, more common, data issues.

Method: Last observation carried forward / backward
Description: For a given account, use a non-missing value in that variable to fill missing values before and/or after it.
Pros: Simple; uses an actual value that the account has; useful for origination variables.
Cons: Assumes stability in account behavior; assumes data is MCAR.

Method: Mean imputation
Description: Use the average of the observed values in place of the missing value.
Pros: Simple.
Cons: Distorts the empirical distribution of the data; does not use all information in the data set.

Method: Hot decking and cold decking
Description: Replace missing values with a value from a similar observation in the sample (cold decking uses a similar observation from outside the sample).
Pros: Conceptually straightforward; uses existing relationships in the data.
Cons: Can be difficult to define the characteristics of a similar observation; continuous data can be problematic; assumes data is MAR.

Method: Regression
Description: Use univariate or multivariate regression models to impute the missing value, with the variable that is missing as the dependent variable.
Pros: Fairly easy to implement; uses existing relationships in the data.
Cons: Can lead to overstating relationships among the variables; estimated values may fall outside accepted ranges; assumes data is MAR.

Understanding why the data is missing is an important first step in resolving the issue. The imputation methods outlined above can provide a temporary solution for creating clean historical data for methodology development. However, in the long run, FIs will benefit from establishing a more permanent solution by constructing data standards and procedures and implementing a robust ongoing monitoring process to ensure the data is accurate, clean, and consistent.
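For instance, two of the simpler methods above, last observation carried forward/backward and mean imputation, might be sketched with pandas on a hypothetical panel (the column names and values are invented):

```python
import pandas as pd

# Hypothetical panel: two accounts with missing LTV and DTI values.
df = pd.DataFrame({
    "account": ["A", "A", "A", "B", "B", "B"],
    "ltv":     [80.0, None, 78.0, None, 95.0, 96.0],
    "dti":     [0.30, 0.32, None, 0.41, None, 0.40],
})

# Last observation carried forward/backward, within each account.
locf = df.groupby("account")["ltv"].transform(lambda s: s.ffill().bfill())

# Mean imputation: replace missing DTI values with the observed average.
mean_imputed = df["dti"].fillna(df["dti"].mean())

print(locf.tolist())
print(mean_imputed.tolist())
```

Regression imputation would instead fit a model of the missing variable on the other columns and predict the missing entries; hot decking would copy the value from a similar observation.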

 

Resources:

  1. FASB Accounting Standards Update, No. 2016-13, Financial Instruments – Credit Losses (Topic 326).


CECL Preparation: How Embracing SR 11-7 Guidelines Can Support the CECL Process

The Board of Governors of the Federal Reserve System’s SR 11-7 supervisory guidance (2011) provides an effective model risk management framework for financial institutions (FIs). SR 11-7 covers everything from the definition of a model to the robust policies and procedures that should exist within a model risk management framework. To reduce model risk, any FI should consider following the guidance throughout internal and regulatory processes, as its guidelines are comprehensive and reflect a banking industry standard.

The following items and quotations represent an overview of the SR 11-7 guidelines (Board of Governors of the Federal Reserve System, 2011):

  1. The definition of a model – “the term model refers to a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates.”
  2. A focus on the purpose/use of a model – “even a fundamentally sound model producing accurate outputs consistent with the design objective of the model may exhibit high model risk if it is misapplied or misused.”
  3. The three elements of model risk management:
    • Robust model development, implementation, and use – “the design, theory, logic underlying the model should be well documented and generally supported by published research and sound industry practice.”
    • Sound model validation process – “an effective validation framework should include three core elements: evaluation of conceptual soundness, …, ongoing monitoring, …, and benchmarking, outcomes analysis, …”
    • Governance – “a strong governance framework provides explicit support and structure to risk management functions through policies defining relevant risk management activities, procedures that implement those policies, allocation of resources, and mechanisms for evaluating whether policies and procedures are being carried out as specified.”

Much of what the SR 11-7 guidelines discuss applies to the new aspects introduced by the CECL accounting standard (FASB, 2016). Any FI subject to CECL must provide explanations, justifications, and rationales for the entirety of the CECL process, including (but not limited to) model development, validation, and governance. The SR 11-7 guidelines will help FIs develop effective CECL processes that limit model risk.

Some considerations from the SR 11-7 guidelines with regard to the components of CECL include (but are not limited to):

  • Determining appropriateness of data and models for CECL purposes. Existing processes may need to be modified due to some differing CECL requirements (e.g., life of loan loss estimation).
  • Completing comprehensive documentation and testing of model development processes. Existing documentation may need to be updated to comply with CECL (e.g., new models or implementation processes).
  • Accounting for model uncertainty and inaccuracy through the understanding of potential limitations/assumptions. Existing model documentation may need to be re-evaluated to determine if new limitations/assumptions exist under CECL.
  • Ensuring validation independence from model development. Existing validation groups may need to be further separated from model development (e.g., external validators).
  • Developing a strong governance framework specifically for CECL purposes. Existing policies/procedures may need to be modified to ensure CECL processes are being covered.

The SR 11-7 guidelines can provide FIs with the information they need to start their CECL process. Although not mandated, following these guidelines is important for reducing model risk and for establishing standards that teams within and across FIs can follow and regard as a true industry standard.

Resources:

  1. Board of Governors of the Federal Reserve System. “SR 11-7 Guidance on Model Risk Management”. April 4, 2011.
  2. Daniel Brown and Dr. Craig Peters. “New Impairment Model: Governance Considerations”. Moody’s Analytics Risk Perspectives. The Convergence of Risk, Finance, and Accounting: CECL. Volume VIII. November 2016.
  3. Financial Accounting Standards Board (FASB). Financial Instruments – Credit Losses (Topic 326). No. 2016-13. June 2016.


 

Improve Your Problem-Solving Skills

This is the fifth post in an occasional series about the importance of technical communication in the workplace.

“Work organisations are not only using and applying knowledge produced in the university but they are also producing, transforming, and managing knowledge by themselves to create innovations” (Tynjälä, Slotte, Nieminen, Lonka, & Olkinuora, 2006).

Problem-solving skills are rooted in the fact that you must learn how to think, not what to think. Most classes in high schools and colleges teach you what to think (e.g., history dates, mathematical equations, grammar rules), but you must develop problem-solving skills to learn how to think.

In the technical workplace, you are expected to be given a problem and come up with a solution, possibly one that has never been thought of before. Employers are looking for people who have the right skills to do that very thing. Because of this, most interview processes will inevitably include at least one problem-solving question.

  • “How have you handled a problem in your past? What was the result?”
  • “How would you settle the concerns of a client?”
  • “How would you handle a tight deadline on a project?”

The way you answer the problem-solving question usually gives the interviewer a good sense of your problem-solving skills. Unfortunately for the interviewee, problem solving is grouped into a BROAD skill set made up of:

  • Active listening: in order to identify that there is a problem
  • Research: in order to identify the cause of the problem
  • Analysis: in order to fully understand the problem
  • Creativity: in order to come up with a solution, either based on your current knowledge (intuitively) or using creative thinking skills (systematically)
  • Decision making: in order to make a decision on how to solve the problem
  • Communication: in order to communicate the issue or your solution to others
  • Teamwork: in order to work with others to solve the problem
  • Dependability: in order to solve the problem in a timely manner

So how do you, as the interviewee, convey that you have good problem-solving skills? First, acknowledge the skill set needed to solve the problem relating to each step in the problem-solving process:

Step in Problem Solving | Skill Set Needed
1. Identifying the problem | Active listening, research
2. Understanding and structuring the problem | Analysis
3. Searching for possible solutions or coming up with your own solution | Creativity, communication
4. Making a decision | Decision making
5. Implementing a solution | Teamwork, dependability, communication
6. Monitoring the problem and seeking feedback | Active listening, dependability, communication

Then, note how you are either planning to or are improving your problem-solving skills. This may include gaining more technical knowledge in your field, putting yourself in new situations where you may need to problem solve, observing others who are known for their good problem-solving skills, or simply practicing problems on your own. Problem solving involves a diverse skill set and is key to surviving in a technical workplace.

Resources:

  1. Problem-Solving Skills: Definitions and Examples. Indeed Career Guide.
  2. Tynjälä, Päivi & Slotte, Virpi & Nieminen, Juha & Lonka, Kirsti & Olkinuora, Erkki. (2006). From university to working life: Graduates’ workplace skills in practice.

 


 

 

The Importance of Good Teamwork in the Technical Workplace

This is the fourth post in an occasional series about the importance of technical communication in the workplace.

Daily teamwork is an essential part of technical workplace success. Strong technical communication and collaboration skills are necessary to be an active and successful member of a team working to achieve a common goal.

When thinking about the term teamwork, the collaborative effort of a team, many think of Tuckman’s stages (1965) of team development: forming, storming, norming, and performing. In 1977, Tuckman added a fifth stage: adjourning.

  1. Forming – This stage involves the meeting of all the team members, discussions of each member’s skills, and dividing up of responsibilities.
  2. Storming – This stage involves the sharing of ideas among the members. Team leaders must resolve any challenges/competition between members and ensure that the project is moving forward.
  3. Norming – This stage involves the complete understanding of a common goal and responsibilities among members. Major conflicts are resolved.
  4. Performing – This stage involves the team members working efficiently together with minimal issues. The project work should continue on schedule.
  5. Adjourning – This stage involves the team members reaching their project end and going their separate ways onto new projects.

Although Tuckman’s stages represent a standard team development flow, there is much more to think about as a member of that team. How should I converse with others during a conflict in the Storming stage? How should I discuss my skills with other members in the Forming stage? How do I ensure that I do not fall behind the project schedule in the Performing stage?

Here are some tips that may help you in the different stages of team development:

  • Be flexible in your work. In the Forming stage, you may be asked to complete a task that you may not particularly enjoy. Thus, in order to be a good team member, you must be flexible enough to say that you will complete the task to reach the team’s common goal.
  • Complete your tasks in a timely manner. In the Performing stage, keep track of your own responsibilities and when the tasks are due. Communicate freely with the leader of the team throughout the stage to keep up-to-date on the team’s activities. If possible, finish your tasks early and offer help to other team members.
  • Avoid conflicts. In the Storming stage, some conflict is certain, but there are ways to avoid larger conflicts with other team members.
    • Be aware of other members’ attitudes towards certain topics. Speak about those topics in smaller settings.
    • Offer compromises when tensions start to rise. The compromises might seem more appealing than the associated conflicts.
    • Attempt to resolve conflicts as soon as possible. The quicker they are resolved, the quicker the project can move forward.
    • Communicate face-to-face. Sometimes words get lost in translation.
  • Communicate often. Throughout the stages, make it a point to communicate with the team leader and other members of the team on a continual basis. This may include sending your status updates to the team leader, asking questions about your tasks, or simply checking in with other members.

All in all, it is important to be a team player. Every team member should be on the same side: the one that completes the project efficiently, successfully, and with minimal headaches.

Resources

  • Tuckman, B. W. (1965). Development sequence in small groups. Psychological Bulletin, 63, 384-399.

