Community banks work with policies to address the hidden risks of AI.


Community banks as a group are in the early stages of integrating artificial intelligence into their operations, and most are far from having formal written policies on its use.

But both actions can, and should, happen at the same time.

"It's a little shocking to see that many community-based institutions are barely dipping their toes in the water," said Jim Perry, senior strategist at Market Insights, a consulting firm for community banks and credit unions. "The idea of establishing the kind of governance and oversight that will help them navigate AI in the future is just coming across their radar screen."

While secure and compliant use of artificial intelligence and generative AI is a concern for financial institutions of all sizes, it is especially difficult for community banks. These organizations typically partner with third parties rather than building capabilities in-house, which means less visibility into other parties' practices and an open door to fourth-party risk.

"These concerns are not new or unique to AI tools," said Jasper Sneff Nanni, principal at financial services consulting firm FS Vector. "They are usually within the scope of existing privacy, information security, third-party risk management, and model risk management programs. However, the recent popularity of these tools can create situations where employees do not realize that there are risks."

For example, bank employees may use a third-party AI product to transcribe audio from a conference call or summarize client documents without realizing that, in some cases, confidential company data or customers' personally identifiable information may be leaked back to the tool provider. In writing AI policies for both banks and fintechs at FS Vector in recent months, Sneff Nanni has found that the main concerns about AI tools are privacy and information security risks and exposure of the company to model risk.

He advises banks to turn to enterprise versions of tools from companies like OpenAI, Google and Anthropic, which have more transparent data-use policies than their free, off-the-shelf counterparts.

The question of how far to go with consumer-grade generative AI products arose at Merchants & Marine Bank in Pascagoula, Mississippi.

The $687 million-asset firm is "always looking to leverage technology to punch above our weight class," said Jeff Trammell, its chief operations officer. "AI can be a great leveler." He and other employees began experimenting with ChatGPT for fun, such as having the bot write a Smashing Pumpkins-tinged song about a very angry chief risk officer. ("It was a great song," Trammell said. "It sounded just like [lead singer] Billy Corgan wrote it.") Once they saw the potential of ChatGPT, they realized a practical use: getting the ball rolling on the policy governing their two-year-old cannabis banking program.

"You can buy off-the-shelf policies and procedures for SBA [Small Business Administration] lending and traditional mortgage lending, but for cannabis banking, there's nothing," Trammell said. "You can ask different banks, but oftentimes these programs are secret. AI can help you work out the angles of such high-risk activity."

The team was careful to keep sensitive information away from ChatGPT. Instead, they asked general questions to start the conversation, such as, "What are the risk factors for a small community bank in developing a cannabis banking program?"

"We developed the entire program in less than 60 days," Trammell said. "ChatGPT helped us get past the basics quickly."

At this point, Merchants & Marine has progressed as far with ChatGPT as Trammell feels comfortable. Before he and his colleagues explore other generative AI tools, Trammell says they will lay down rules about data governance and control, such as who at the bank is allowed to use these products and what questions they can ask without crossing a line.

“The big question is how do we protect the public's trust that our information is confidential?” Trammell said.

Another challenge is how to get started.

Julian Thurlow, president and CEO of the $900 million-asset Reading Cooperative Bank in Reading, Massachusetts, is approaching AI cautiously and prefers to wait for more guidance from regulators. For now, Reading's use of AI is limited to fraud detection and transaction blocking in the peer-to-peer payment system it uses, Chuck.

"You have to know about this space before you run with it," he said.

Bankwell Bank in New Canaan, Connecticut, is examining AI through the lens of its third-party risk management framework and considers anything related to AI to be high risk. The $3.2 billion-asset bank's journey into AI, which includes a small-business lending pilot using generative AI and experiments with AI-powered sales and marketing, pre-qualification and underwriting in its small-business banking unit, has outpaced its written rules. "It's hard to put together a solid, well-rounded policy outside of treating it as a high-risk partnership," said Chief Innovation Officer Ryan Hildebrand. "We're at the starting line in terms of procedures and policies for using AI, but that's not where we're at when it comes to using the products themselves."

Kim Kirk, chief operations officer of Queensborough National Bank & Trust in Louisville, Georgia, purchased an AI policy template that she plans to customize. The timing was opportune: the $2 billion-asset bank is buying fraud solutions that use machine learning from its primary supplier, has hired a machine learning engineer for its business intelligence unit, and wants to understand how cyber threat actors are using AI.

She has also dealt with such questions on a case-by-case basis. Earlier this year, Kirk considered allowing her project managers to use a transcription tool during the bank's core transformation. Because the recordings might contain customer or strategic information, she investigated the tool's transcription functionality and the security of its archives, and ultimately was not comfortable moving forward.

As at Merchants & Marine, the judicious use of ChatGPT is on her radar.

"We need to control what our employees do with ChatGPT and make sure they understand that putting any bank information into ChatGPT or other public generative AI models is not appropriate," Kirk said.

On the subject of fourth-party risk, which is posed by the vendors of a bank's vendors, Sneff Nanni recommends that banks include a clause in their policies requiring that any AI tool used by a fintech partner be approved under the same procedures the bank would apply to tools it uses directly.

Looking further ahead, banks should be careful to adapt AI and model risk management policies to generative AI, experts said.

"Banks tend to be more conservative than many other institutions we work with and have very strict validation standards and transparency requirements around models," said Jay Kumarasamy, senior associate at Luminos.Law, a law firm focused on AI risk that was founded by both data scientists and lawyers. "Some of these requirements don't work well when dealing with generative AI models."

For example, a standard model risk management framework has three lines of defense: robust model development, model validation (ensuring that the model performs as expected) and internal audit. With generative AI, it is difficult to validate a model by replicating it, given the time and expense involved in building one. Instead, a bank may want to consider other methods, such as red-teaming and evaluating the model against benchmark datasets.

Generative AI models also bring unique risks, such as hallucination and toxicity, or disrespectful language.

With all types of AI, Perry says the space is moving so quickly that community banks must establish their own governance frameworks rather than relying on the policies of their core providers or third-party vendors.

“This issue needs to rise to a priority level so that community banks are not left behind,” he said.
