By Amod Gupta, Senior Director, Product Management, Traceable AI
ChatGPT, the generative artificial intelligence (AI) chatbot from OpenAI, is the fastest-growing consumer application in history with 100 million users just two months after launch. Consumers and enterprise teams alike are intrigued by this intelligent chatbot’s ability to streamline searches and provide knowledgeable answers with one or more prompts. As a result, there’s a race to develop new services around ChatGPT and similar tools. Companies want to leverage generative AI for a wide array of use cases, improving worker productivity and enhancing the customer experience with new digital services.
However, this race is coming at the expense of enterprise security.
Typically, new technology products go through extensive security testing before launch. Innovative companies embed DevSecOps processes into software development so that security is considered throughout production rather than bolted on as an afterthought. (Sadly, only 30 percent of organizations have fully implemented DevSecOps, a best practice that should be standard.) Yet ChatGPT has no such guardrails.
Already, individuals have input confidential and sensitive information into the ChatGPT interface, giving enterprise security teams a taste of what is to come when they launch similar services. Leaders of companies in highly regulated industries, such as financial services, healthcare, and retail, are right to be worried about data leaks, which can expose them to regulatory censure and fines. However, other risks are also hiding in plain sight.
Two Generative AI Risks Firms Should Be Mitigating
First, generative AI tools, such as chatbots, virtual assistants, and intelligent search engines, reflect the data they’re trained on. Thus, they can provide biased information that could tarnish companies’ brands, such as offering inappropriate political, racial, or social commentary. Companies deploying these tools will need to filter this content out before it reaches employees and customers.
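As an illustration, such an output filter can start as simply as a blocklist check. This is a minimal sketch with assumed, illustrative category terms; real deployments would use a trained moderation classifier or a vendor moderation endpoint rather than keyword matching.

```python
import re

# Hypothetical blocked-topic terms for illustration only; a production
# system would use a moderation model, not a hand-built keyword list.
BLOCKED_TERMS = {"politics", "political", "racial", "religion"}

def is_safe_response(text: str) -> bool:
    """Return False if the generated text touches a blocked topic."""
    # Tokenize into lowercase words so "trace" doesn't match "race", etc.
    words = set(re.findall(r"[a-z]+", text.lower()))
    return words.isdisjoint(BLOCKED_TERMS)
```

A gateway sitting between the model and the user would call `is_safe_response` on every generated answer and suppress or regenerate anything that fails the check.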
Second, data that’s input into generative tools like ChatGPT travels over application programming interfaces (APIs). Public-facing generative AI tools will often use partner APIs to query backend company systems, while internal chatbots use internal APIs to communicate with internal systems. Companies will need to implement controls that prevent sensitive data, such as account codes, Social Security numbers, and corporate strategy information, from being input into these systems.
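A minimal sketch of such an input control is a redaction pass on each prompt before it leaves the enterprise boundary. The patterns below are illustrative assumptions covering two well-known formats; a real deployment would rely on a full data-loss-prevention (DLP) service with far broader coverage.

```python
import re

# Hypothetical detection patterns for illustration; real DLP tooling
# covers many more data types and formats.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Mask sensitive values before the prompt is sent to an external API."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt
```

Running every outbound prompt through `redact_prompt` means the external model never sees the raw value, only a placeholder such as `[REDACTED SSN]`.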
Teams will also need to ensure that data returned from backend systems, such as customer relationship management (CRM), enterprise resource planning (ERP), and supply chain management (SCM) systems, is protected as it travels over partner APIs, where it is at risk of exfiltration. Attackers are opportunists: they will gladly pivot from launching challenging attacks on well-defended data stores to targeting data in transit, which often has weak or nonexistent protections.
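One basic safeguard for data in transit is refusing to call any partner endpoint that is not using TLS. The guard function below is a hypothetical sketch of that policy check, not a complete transport-security program (which would also cover certificate validation, mutual TLS, and so on).

```python
from urllib.parse import urlparse

def require_tls(url: str) -> str:
    """Reject any partner API URL that would send data over plaintext HTTP."""
    if urlparse(url).scheme != "https":
        raise ValueError(f"Refusing plaintext connection: {url}")
    return url
```

Wiring this check in front of every outbound API client call makes an unencrypted partner connection fail loudly instead of silently leaking CRM, ERP, or SCM data.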
Don’t Wait for Government Regulations to Catch Up
Advanced technology often outpaces governmental regulations, which then begin to catch up when gross violations or major security breaches occur. Witness cryptocurrency, launched back in 2009 but only lightly regulated until the recent FTX debacle revealed a shocking lack of governance at an industry leader.
As a result, enterprise leaders and teams need to move forward to protect their own data and businesses and not wait for governmental regulations to be implemented, which could take years. They can do so by taking these steps:
- Scrutinize partner security programs: Enterprise leaders should perform due diligence on partners’ security programs, policies, practices, and controls to ensure they meet their requirements. It’s better to conduct a thorough risk assessment than to rush to innovate and be faced with cleaning up data privacy and compliance violations afterward.
- Secure all connections: Enterprise teams should secure their own APIs and ensure that their partners do likewise. APIs can often be a weak link for companies, as many lurk in the shadows or operate as zombies, maintaining data connections even though they’re no longer used. Teams can leverage API security programs to discover all APIs, wherever they live; gain enterprise-class protection against attacks; and instantly find and remediate threats across their entire API ecosystem.
- Implement DevSecOps processes: Since more than 70 percent of organizations don’t fully use DevSecOps processes, now is a great time to accelerate their implementation. The pace of change to the IT environment will surely accelerate with generative AI, creating confusion about security processes and guardrails and increasing the complexity of the IT environment. Already, IT and security professionals blame a third of misconfigurations (33 percent) on flawed internal guidance – or a lack of it. These issues are sure to increase with generative AI.
Gaining greater familiarity with DevSecOps frameworks, processes, and intensive cross-functional collaboration now will ensure that development and security teams can innovate at pace, while still complying with enterprise data privacy and security requirements.
Innovate Safely by Protecting Data Now
ChatGPT and other generative AI tools will transform business as we know it. However, enterprise teams need to move now to protect their sensitive data from being exposed, as it traverses APIs to answer user queries.
By taking these three steps – scrutinizing partner security programs, securing all connections, and implementing DevSecOps processes – enterprises can lead with secure generative AI innovation. They can then deploy new capabilities that employees and customers love, removing friction from routine work processes and digital interactions.
Amod Gupta is Senior Director, Product Management at Traceable AI, an industry-leading API security and observability company. He helped build and oversees the company’s API Catalog solution, which provides API discovery and risk assessment by automatically and continuously discovering all APIs, identifying sensitive data flows, and assessing API risk exposure to manage API-related security threats.