Is Generative AI Secure Enough for Banking?


By Devin Partida, Editor-in-Chief, ReHack.com

Generative AI enables convenient, conversational banking by giving users access to a range of products and services. It uses machine learning to understand queries and execute commands expressed in natural human language.

Banks use this technology to offer new and advanced services to their customers at scale. AI helps financial institutions provide consistent service, information and support across many geographic locations.

AI also improves customer service by resolving issues and answering questions 24/7. These systems can handle basic operations and requests through prompts and respond to natural language queries, freeing human customer service representatives to focus on complex issues and problems.

Some popular uses of AI in financial services include:

  • Chatbots
  • Fraud detection
  • Financial analysis
  • Portfolio management
  • Loan score calculation
  • Financial report generation
  • Financial forecasting and advising

AI Threats to Finance

AI is a revolutionary technology. It continuously learns and adapts to the needs of its users and holds great potential for other applications in various fields. The finance sector is no stranger to using AI in its daily operations.

The National Institute of Standards and Technology (NIST) announced the formation of a public working group on generative AI. The group aims to help organizations by building a framework that addresses the particular risks generative AI introduces.

Trustworthiness continues to be an issue with generative AI, even in highly regulated industries like finance. Cyber risk management programs should run continuously to mitigate such risks, and they should be capable of assessing and countering evolving threats.

Some of these threats are:

Prompt Injection

This strategy involves crafting input data that causes the generative AI to execute harmful instructions beyond the analysis it was performing. As with jailbreaking, a hacker tries to persuade a chatbot to do something it would not otherwise do.

It bypasses the AI's safeguards and can cause the system to execute malicious code if the input is improperly secured. Prompt injection can enter the system through malicious text or code in documents the AI uses to learn and develop.

Hackers use this method on user inputs such as chat windows or search bars. Another technique involves uploading documents with extraneous text invisible to human readers, which an unsecured AI will inadvertently process.
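A common first line of defense is screening inputs for instruction-override phrasing before they ever reach the model. Below is a minimal sketch of that pattern in Python; the patterns and the call_model stub are hypothetical, and a real deployment would layer this check with many other controls.

    import re

    # Phrases that often signal an attempt to override the model's instructions.
    # Illustrative only; attackers vary their wording constantly.
    SUSPICIOUS_PATTERNS = [
        r"ignore (all |any )?(previous|prior) instructions",
        r"disregard (the )?(system|above) prompt",
        r"reveal (your|the) (system prompt|instructions)",
    ]

    def flag_possible_injection(text: str) -> bool:
        """Return True if the text matches a known injection pattern."""
        lowered = text.lower()
        return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

    def call_model(prompt: str) -> str:
        # Stand-in for the institution's actual LLM client.
        return "model response"

    def safe_chat(user_input: str) -> str:
        if flag_possible_injection(user_input):
            # Refuse or escalate rather than passing the input to the model.
            return "This request could not be processed. A representative will follow up."
        return call_model(user_input)

Pattern matching will not stop a determined attacker on its own, but it cheaply blocks the crudest attempts and creates an audit trail for everything else.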

Data Poisoning

Data poisoning can compromise AI models during creation or updates. Generative AI needs millions of documents to build an understanding of text, images or audio. What happens if the AI ingests multiple sources filled with false information?

The AI will generate incorrect results, and financial institutions run the risk of errors in their generated responses. Criminals who use this method flood the internet with false data. As the AI scrapes the web for information, it can incorporate that inaccurate data into its results, damaging users and banking institutions.
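Provenance controls are one way to limit this exposure: ingest training documents only from vetted sources and record where each one came from, so a poisoned batch can be traced and purged later. A minimal sketch, assuming a hypothetical allow-list and catalog:

    import hashlib
    from urllib.parse import urlparse

    # Illustrative allow-list; a real pipeline would manage source vetting
    # through a governed data catalog and review process.
    TRUSTED_DOMAINS = {"federalreserve.gov", "fdic.gov", "docs.examplebank.com"}

    def is_trusted_source(url: str) -> bool:
        """Accept a document only if it comes from a vetted domain."""
        host = urlparse(url).hostname or ""
        return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

    def ingest(url: str, content: bytes, catalog: dict) -> bool:
        if not is_trusted_source(url):
            return False  # drop documents scraped from unvetted sites
        digest = hashlib.sha256(content).hexdigest()
        catalog[digest] = url  # provenance record for tracing bad batches later
        return True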

Other malicious actors can also use AI as a force multiplier to exploit system vulnerabilities. The arrival of AI also increases a system's complexity, raising the number of unknown risks. Institutions and regulators should monitor these threats to build more secure systems and manage the associated risks.

Limitations of AI in Banking

One of the challenges of using AI in the banking industry is workforce adaptation. Beyond restructuring job roles, adding AI expertise and creating upskilling programs, financial institutions eager to integrate AI into their operations must train their employees to work with the tool and understand its limitations.

Some limitations of AI in banking include:

Data Quality

AI relies on massive amounts of data to learn and understand prompts. Financial institutions must feed the AI correct and relevant data, such as banking terminology and common queries, and ensure it understands what clients are trying to communicate.

Training data should be accurate, extensive and relevant to ensure AI performs well and as intended. Constant training and testing are required to ensure generative AI meets customer needs and hits satisfaction standards.
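Even a basic hygiene pass over training records catches many problems before they reach the model. The sketch below assumes records are dictionaries with a "text" field; real pipelines add schema, label and terminology checks on top of this.

    def clean_training_records(records: list[dict]) -> list[dict]:
        """Drop empty, near-empty and duplicate examples. Illustrative only."""
        seen = set()
        cleaned = []
        for rec in records:
            text = (rec.get("text") or "").strip()
            if len(text) < 20:   # drop empty or suspiciously short entries
                continue
            if text in seen:     # drop exact duplicates
                continue
            seen.add(text)
            cleaned.append({**rec, "text": text})
        return cleaned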

Bias

Generative AI can be limited in its ability to produce accurate results. As stated, training generative AI relies on enormous amounts of data. When an AI's training data is insufficient, it can produce erroneous or biased results, causing failed transactions, declined applications and other damage. These outcomes can harm a financial institution's reputation.

Financial institutions must remove any traces of toxicity, bias and erroneous information from their training data. This step is essential when training AI and preparing it for future requests and prompts.

Privacy and Security

Banks and other financial institutions use their clients' data when processing transactions. They may also use this data to train machine learning algorithms for various functions. Banks should be vigilant when handling client data to promote privacy and security.

Certain safeguards must be in place to prevent data theft or unauthorized access by malicious actors. Some financial institutions filter their users' personally identifiable information (PII) to ensure they aren't storing any user data after their AI generates responses.

Filtering can help avoid data breaches that could put users at risk. Masking PII and deleting it after response generation are suitable safeguards that promote proper handling of private data.
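As a simple illustration of PII masking, the sketch below swaps common identifier formats for placeholder tokens before the text reaches the model or any log. The regular expressions are illustrative; production systems rely on vetted detection services that cover far more formats.

    import re

    # Illustrative patterns for common US identifier formats.
    PII_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    }

    def mask_pii(text: str) -> str:
        """Replace detected PII with placeholder tokens."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
        return text

    print(mask_pii("My SSN is 123-45-6789 and my email is jane@example.com"))
    # -> My SSN is [SSN REDACTED] and my email is [EMAIL REDACTED]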

Accuracy

Generative AI can create answers to questions and other prompts in seconds. However, it still struggles with complicated calculations and may produce incorrect answers when prompted.

Banks should enforce safeguards like human review and verification of results to ensure all information produced is error-free. AI should also be trained to detect problematic results in its generated responses.

Final checks done by human personnel will ensure the AI only gives correct information to users and prevent costly mistakes and misunderstandings.
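One practical way to implement that check is to hold any response that states figures, rates or commitments until a person signs off. A minimal sketch of the routing logic, with illustrative trigger patterns:

    import re

    RISKY_SIGNALS = [
        r"\$\s?\d",             # dollar amounts
        r"\d+(\.\d+)?\s?%",     # interest rates or percentages
        r"\b(guarantee|approved|denied)\b",
    ]

    def needs_human_review(response: str) -> bool:
        """Flag responses containing figures or commitments. Illustrative only."""
        return any(re.search(p, response, re.IGNORECASE) for p in RISKY_SIGNALS)

    def deliver(response: str, review_queue: list) -> str | None:
        if needs_human_review(response):
            review_queue.append(response)  # hold for a representative to verify
            return None                    # the customer sees a holding message instead
        return response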

Balancing the Risks and Benefits of AI in Finance

Using AI has merits and challenges in every industry. As the finance sector comes to understand its uses and risks, it should put significant thought into mitigating the risks of generative AI. Organizations relying on AI must understand that no system is perfect and should proceed carefully with such a complex technology.


Devin Partida is an industrial tech writer and the Editor-in-Chief of ReHack.com, a digital magazine for all things technology, big data, cryptocurrency, and more. To read more from Devin, please check out the site.


