Inside Banks’ New Computer-Security Incident Notification Requirements

Banking organizations in the US now have the shortest timeline yet for reporting security incidents under the new rule

In November 2021, the United States’ three federal bank regulatory agencies – the Office of the Comptroller of the Currency (OCC), the Board of Governors of the Federal Reserve System (Board), and the Federal Deposit Insurance Corporation (FDIC) – approved a final rule that changes how US banks must report cyber incidents.

The new rule includes strict guidance around the types of incidents that require a notification, who must be notified, and the timeline in which banks must issue the notification. Importantly, the new timeline requires banking organizations to notify their primary federal regulator of a significant incident as soon as possible and no later than 36 hours after they determine the incident has occurred – making this notification timeline the shortest yet among global incident response laws.

The final rule takes effect April 1, 2022, and organizations have until May 1, 2022, to be in full compliance.

Who is Subject to the New US Financial Interagency Guidance?

The final rule on computer-security incident notification requirements applies to all banking organizations and bank service providers regulated by the OCC, the Board, and the FDIC. Specifically, it covers:

  • Banking organizations: The definition differs by regulator, as each agency’s definition is consistent with its supervisory authority:
    • OCC: A national bank, federal savings association, or federal branch or agency of a foreign bank.
    • Board: A US bank holding company, US savings and loan holding company, state member bank, the US operations of foreign banking organizations, and an Edge or agreement corporation.
    • FDIC: An FDIC-supervised insured depository institution, including all insured state nonmember banks, insured state-licensed branches of foreign banks, and insured state savings associations.
  • Bank service providers: A bank service company or other person that performs “covered services,” which are services subject to the Bank Service Company Act.

Notably, designated financial market utilities (FMUs) are excluded from the definitions of both banking organization and bank service provider. This is because designated FMUs are already subject to regulation under Title VIII of the Dodd-Frank Act and are regulated by the Securities and Exchange Commission (SEC) or Commodity Futures Trading Commission (CFTC). Additionally, FMUs that are regulated by the Board are already subject to risk management standards.

Reduce your team’s routine work and help them focus on what inspires them.

Stop using spreadsheets, documents, and unwieldy ticketing systems for your incident readiness and response efforts.

How Does the New US Financial Interagency Guidance Get Enforced?

Each of the three agencies is responsible for enforcing the guidance among the organizations under its supervisory authority. Depending on the agency, enforcement actions can include prompt corrective action directives, removal and prohibition orders, civil monetary penalties, cease and desist orders, and written agreements.

For example, the OCC is authorized to take enforcement action against national banks, federally chartered savings associations and their subsidiaries, federal branches and agencies of foreign banks, and institution-affiliated parties (IAPs). Formal enforcement actions from the OCC include capital directives, cease and desist orders, restitution orders, civil money penalty orders, safety and soundness orders, prohibition orders, and securities enforcement actions, among others.

In terms of monetary penalties, all federal agencies are required to adjust their maximum penalty limits annually for inflation, meaning these amounts will change over time.

What are the Notification Requirements for Responding to a Security Event?

Banking organizations and bank service providers must issue certain notifications depending on the type of incident they experience. Specifically, the final rule defines two levels of incidents: 

  • Computer-security incident: An occurrence that (1) results in actual harm to the confidentiality, integrity, or availability of an information system or the information that the system processes, stores, or transmits, or (2) violates or imminently threatens to violate security policies, security procedures, or acceptable use policies.
  • Notification incident: A computer-security incident that materially disrupts or degrades (or is reasonably likely to do so) a banking organization’s ability to:
    • Carry out operations, activities, or processes, or the ability to deliver banking products and services to a material portion of customers in the ordinary course of business
    • Support business lines, including associated operations, services, functions, and support, that would result in a material loss of revenue, profit, or franchise value upon failure
    • Run operations, including associated services, functions, and support, that would pose a threat to the financial stability of the US upon failure

Notification Requirements for Banking Organizations

Banking organizations only need to issue a notification under the final rule if they experience a notification incident, not a computer-security incident. 

In such cases, banking organizations must notify their applicable regulator “as soon as possible” and no later than 36 hours after determining that a notification incident has occurred. While this timeline is the shortest yet among global incident notification laws, it’s important to note that the clock doesn’t start until the banking organization has determined that (1) an incident has occurred and (2) it meets the threshold for issuing a notification.
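
As a rough illustration of how that clock works, here is a minimal sketch (the helper name and example timestamps are hypothetical, not part of the rule) that computes the notification deadline from the moment a banking organization determines a notification incident has occurred:

```python
from datetime import datetime, timedelta, timezone

# The 36-hour clock starts when the organization determines that a
# notification incident has occurred, not when the incident itself began.
NOTIFICATION_WINDOW = timedelta(hours=36)

def notification_deadline(determined_at: datetime) -> datetime:
    """Latest time to notify the primary federal regulator under the final rule."""
    return determined_at + NOTIFICATION_WINDOW

# Example: an incident detected on April 4 is only determined to be a
# notification incident on April 5 after investigation.
determined = datetime(2022, 4, 5, 9, 0, tzinfo=timezone.utc)
print(notification_deadline(determined))  # 2022-04-06 21:00:00+00:00
```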

The exact requirements for issuing this notification differ slightly for each of the three regulators, but in every case the notification must be made to the agency’s designated point of contact by email, telephone, or a similar method authorized by the regulator.

Finally, the rule does not require any specific content or formatting for the notification, but it does ask banking organizations to provide general information about what they know regarding the incident.

Notification Requirements for Bank Service Providers

Bank service providers must issue a notification if they experience a computer-security incident that materially disrupts or degrades covered services for four or more hours.

In these cases, bank service providers must notify at least one bank-designated point of contact at each affected banking organization as soon as possible after determining that a computer-security incident has occurred. The rule does not specify any more detailed timelines for “as soon as possible.”

The bank-designated point of contact should be an email address, phone number, or other means of contact provided by the banking organization. If the organization has not provided a point of contact, the bank service provider should notify the banking organization’s CEO and CIO, or two individuals with comparable responsibilities, through reasonable means. The rule also doesn’t specify any particular formatting or language for these notifications.

Importantly, the rule makes an exception to the notification requirement for any disruption of service due to scheduled maintenance, testing, or software updates that the bank service provider previously communicated to its banking organization customers.
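
Putting those conditions together, here is a minimal, hypothetical sketch of the decision a bank service provider faces under the rule; the field names and structure below are illustrative assumptions, not language from the regulation:

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class ServiceDisruption:
    """Simplified, hypothetical view of an outage at a bank service provider."""
    materially_disrupts_covered_services: bool
    duration: timedelta
    is_scheduled_maintenance: bool
    previously_communicated: bool

def must_notify_banking_customers(event: ServiceDisruption) -> bool:
    # Scheduled maintenance, testing, or software updates communicated in
    # advance are excepted from the notification requirement.
    if event.is_scheduled_maintenance and event.previously_communicated:
        return False
    # Otherwise, notify if covered services are materially disrupted or
    # degraded for four or more hours.
    return (event.materially_disrupts_covered_services
            and event.duration >= timedelta(hours=4))

# A six-hour unplanned outage of a covered service triggers notification.
print(must_notify_banking_customers(
    ServiceDisruption(True, timedelta(hours=6), False, False)))  # True
```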

What Can Trigger a Notification Under the New US Financial Interagency Guidance?

The trigger for a notification under the final rule differs for banking organizations and bank service providers.

For banking organizations, examples of notification incidents include:

  • Ransomware attack: A ransomware attack occurs when hackers use malware to encrypt or steal data and hold it hostage in exchange for a ransom payment. Any such attack that targets a core banking system or backup data qualifies as a notification incident, regardless of whether the banking organization was able to recover the data.
  • Distributed denial of service attack: A distributed denial of service (DDoS) attack occurs when hackers overwhelm a server, service, or network (or its infrastructure) with a flood of fake traffic to create a “traffic jam.” A DDoS attack that disrupts customer account access for an extended period of time (e.g., more than four hours) qualifies as a notification incident.
  • Failed system change: If a banking organization plans a system upgrade or change that fails and, as a result, leads to widespread outages in which customers and employees can’t access their accounts, that failure qualifies as a notification incident. Even though no malicious activity is at play in this case, a material disruption to the banking organization’s ability to deliver services has occurred.

For bank service providers, examples of computer-security incidents that require a notification include the following, but only when they cause actual harm and materially disrupt or degrade covered services for four or more hours:

  • Watering hole attack: A watering hole attack occurs when hackers monitor individual behavior and infect websites that their intended targets visit regularly in order to gain access to their computers and networks. If successful, the hacker can then view, alter, steal, or destroy any data to which the victim has access, which can cause actual harm to the integrity and confidentiality of data.
  • Exfiltration: Exfiltration, a component of many cyber attacks, occurs when hackers gain unauthorized access and move data to their own servers or devices. If this access causes actual harm to the confidentiality or integrity of data, then it can qualify as a notifiable computer-security incident for bank service providers.
  • Drive-by download attack: A drive-by download attack occurs when hackers install a malicious program on a user’s computer, typically by obscuring it within a legitimate website or application. In doing so, the hacker can gain access to hijack the device or steal data. The former case can cause a material degradation of service for an extended period of time, while the latter can cause actual harm to the confidentiality and integrity of data.

Need help with an incident response strategy?

Leverage the BreachRx platform to build an actionable incident response plan today!

How Can Banks Prepare to Comply with the New US Financial Interagency Guidance?

The final rule from the US financial agencies is not the only requirement governing incident notification for financial organizations. Keeping track of all of these rules requires a proactive approach to monitoring laws (and changes to them) and preparing response plans accordingly. This is particularly true given the new rule’s 36-hour reporting requirement for banking organizations.

To achieve this proactive approach, banking organizations and bank service providers must:

  • Ensure visibility into practices for collecting, storing, and using data
  • Adopt clear security procedures to protect and monitor data
  • Assign responsibility over incident response measures within the organization

Building on this foundation, banking organizations and bank service providers should also prepare for three critical phases of regulatory and incident response in order to fully comply with the new interagency guidance:

Readiness

What: Readiness is about proactively preparing regulatory and incident response plans so that organizations can jump into action immediately once an incident occurs, without having to first figure out a plan of action.

Why: Proactive preparation is necessary in order to meet tight response timelines, like the 36 hours required for banking organizations. Additionally, a fast response can help reduce the costs associated with a breach and accelerate the return to business as usual.

How: Review the requirements outlined in regulations like the one from the US financial agencies, as well as in customer and partner contracts, then develop clear regulatory and incident response plans that are ready to use for each one. Make sure to organize and document which teams and roles should execute which tasks for different events, as in the sketch below. Put these plans to the test through simulations and tabletop exercises.
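
As one lightweight way to start documenting that mapping before moving to dedicated tooling, a simple structure like the hypothetical example below can work; the incident types, roles, and tasks shown are purely illustrative:

```python
# Hypothetical mapping of incident types to owners and role-specific tasks.
response_plan = {
    "ransomware": {
        "owner": "CISO",
        "tasks": [
            ("Security Operations", "Isolate affected core banking systems"),
            ("Legal", "Assess whether the event is a notification incident"),
            ("Compliance", "Notify the primary federal regulator within 36 hours of determination"),
        ],
    },
    "failed_system_change": {
        "owner": "CIO",
        "tasks": [
            ("IT Operations", "Roll back the change and restore service"),
            ("Compliance", "Evaluate whether the outage materially disrupts services"),
        ],
    },
}

# Print a quick summary that could feed a tabletop exercise agenda.
for incident_type, plan in response_plan.items():
    print(f"{incident_type} (owner: {plan['owner']})")
    for team, task in plan["tasks"]:
        print(f"  {team}: {task}")
```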

Response

What: Response is the manner in which organizations act when an incident does occur, including how they identify what happened, how they move to correct the issue, and how they notify the appropriate parties.

Why: Responding to an incident appropriately (based on regulatory requirements) and quickly is essential to complying with regulations – and therefore avoiding penalties and risk – and to retaining customer trust.

How: Identify the circumstances of the incident, including exactly what happened, how and when it occurred, what data was affected, potential consequences, and if it meets the threshold for notification. Then collaborate across the organization to issue notifications to the appropriate agencies and customers based on regulatory requirements and to correct the issue, if possible. Consider providing a safe haven where teams can communicate outside operational environments that might be compromised during an incident.

Ongoing Management

What: Ongoing management means making regulatory and incident response a routine activity. It requires regularly revisiting regulatory and incident response plans to keep them up to date as regulations, contracts, and external threats change and to improve them following a previous response effort. It also means measuring and reporting on the types and frequency of incidents and on response efficiency, so privacy teams can analyze and improve their performance.

Why: Paying continued attention to incident response plans is essential to ensure those plans reflect the latest requirements and that everyone involved is aligned on their responsibilities so that the organization can jump into action quickly and confidently when the next incident occurs.

How: Introduce a centralized dashboard for reporting on and monitoring incident response plans and updates to regulations and contracts. Additionally, provide access to this dashboard so that stakeholders can remain aligned on any changes and stay aware of their responsibilities.

What Sets the US Financial Interagency Guidance Apart from Other Regulations Governing Banks?

The final rule from the OCC, the Board, and the FDIC is not the only incident response regulation that applies to banks in the US.

For example, the Gramm-Leach-Bliley (GLB) Act requires certain financial institutions to notify their federal regulator of instances of unauthorized access to sensitive consumer information, and the Bank Secrecy Act requires certain financial institutions to report incidents of suspicious activity, such as money laundering.

Meanwhile, several states have also started to introduce their own regulations. In New York, 23 NYCRR 500 requires financial institutions to report qualifying cybersecurity events to the state’s Department of Financial Services within 72 hours.

The new interagency guidance differs from these regulations – and from other global laws that govern more than just financial institutions – in three critical ways:

1) High threshold for notification

The final rule has different thresholds for notification for banking organizations and bank service providers, but both thresholds are relatively high compared to other global incident notification regulations.

For example, banking organizations only need to issue a notification (and only to their regulating agency) in the case of a notification incident, which is an event that materially disrupts or degrades certain operations. This threshold mostly covers large-scale attacks like ransomware and DDoS.

Meanwhile, bank service providers only need to issue a notification if they experience a computer-security incident that materially disrupts or degrades covered services for four or more hours. The length of time of the incident is a fairly unique nuance in this regulation compared to other global regulations.

These thresholds stand in contrast to other laws, such as Singapore’s PDPA, under which even a non-malicious activity like mistakenly exposing data can trigger a notification.

2) A definition of harm focused on data and systems, not people

Like many incident notification regulations, the final rule’s definition of notifiable incidents invokes a standard of harm. However, unlike many other regulations, this standard of harm focuses on data and systems, not people.

Specifically, the final rule looks at harm to the confidentiality, integrity, or availability of an information system or the information that the system processes, stores, or transmits. It does not explicitly define harm, though. In contrast, New Zealand’s Privacy Act 2020 looks at harm to individuals based on unauthorized access to personal information – a practice that is much more common among laws like these.

What sets the final rule apart even further is the threshold for harm, as it requires an incident to “result in actual harm” to qualify as a computer-security incident (and that alone does not make it a reportable incident for banking organizations, only for bank service providers). Most other laws use a much lower threshold of “likely to cause harm.”

3) Shortened notification window

The final rule requires banking organizations to notify their federal regulator of a notification incident as soon as possible and no later than 36 hours after determining the incident occurred. The 36-hour window is the shortest yet among global regulations, making it essential for banking organizations to have a clear process in place so they can respond quickly, confidently, and completely. However, there’s more to this requirement than meets the eye.

The goal of this short timeline is to help the federal agencies understand the impact of incidents quickly and take fast action to offer protection and avoid additional fallout. In pursuit of this goal, the rule only asks that banking organizations share general information about what they know. This stands in contrast to most other incident notification laws, which have strict requirements for the format and contents of notifications issued by companies following an incident.

The other important point is that the 36-hour timeline only starts after a banking organization has determined a notification incident has actually occurred. This means time spent investigating the incident to determine whether it meets the threshold for a notification incident does not count toward the 36 hours. In contrast, the clock for the 72-hour reporting window under GDPR starts when an organization first becomes aware of an incident (and therefore includes any time spent investigating).
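
A hypothetical timeline makes the difference concrete; the timestamps below are invented purely for illustration:

```python
from datetime import datetime, timedelta

# Same hypothetical incident viewed under both regimes.
became_aware = datetime(2022, 4, 4, 8, 0)   # organization first learns something is wrong
determined = datetime(2022, 4, 5, 14, 0)    # investigation concludes a notification incident occurred

gdpr_deadline = became_aware + timedelta(hours=72)  # GDPR clock starts at awareness
rule_deadline = determined + timedelta(hours=36)    # interagency rule clock starts at determination

print("GDPR notification deadline:", gdpr_deadline)              # 2022-04-07 08:00:00
print("Interagency rule notification deadline:", rule_deadline)  # 2022-04-07 02:00:00
```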

It’s Time to Make Proactive Regulatory and Incident Management a Priority

The new US financial interagency guidance creates another set of rules banking organizations and bank service providers must follow when incidents occur. Knowing that these organizations are already subject to other regulations within the US and around the world, it’s essential to make proactive regulatory and incident management a priority.

Specifically, taking the time to understand the requirements of each regulation and proactively developing a clear response plan for when an incident occurs is essential if organizations are to respond quickly, completely, and effectively in accordance with all applicable regulations. This is especially important for banking organizations under the new financial interagency guidance, given the 36-hour timeline for reporting.

Making this proactive approach a reality requires organizations to regularly monitor the introduction of new rulings and changes to existing regulations, introduce tailored response plans for each regulation, and assign clear responsibility for each plan. Critically, this effort must be ongoing to reflect the fast pace of change to global incident response regulations.
