In today’s world of rampant hyperbole and buzzwords, it is often a good idea to define exactly how you are using your terms. This is especially true when talking about Artificial Intelligence, Machine Learning, or Deep Learning. More than any others in recent memory, these terms are thrown around so freely that one might think they work by magic. Vendors of every security solution imaginable now claim to employ one or more of these technologies.
It is not the intention of this article to define these terms, nor to differentiate between them. Our purpose here is to propose a list of questions that purchasing decision makers should ask their prospective security vendors about their use of AI. For this article, we will use AI to mean any technology that enables a solution to perform tasks that normally require human intelligence, such as visual perception, speech recognition, and decision-making.
[Image from the NVIDIA blog: “What’s the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning?”]
These questions were developed by John Omernik, distinguished technologist at MapR. John is a recognized expert in detecting security threats and preventing fraud using data analytics. While he now works for a data platform provider, his roots are in the financial industry, where he often sat on the other side of the desk from vendors. He was emphatic in our discussion that his intent is not to throw any solution provider under the bus; rather, he wants to help open the conversation so decision makers can understand exactly what a proposed product is and does. He notes, “I purposefully didn’t provide my version of the correct answer to these questions. I think it is important that each evaluation team compare the answers provided by their short-listed vendor teams and then decide which answers make sense for them.”
Question 1. Understand the technical components of AI in the product. Sometimes a product uses simple classification algorithms on a single type of data and, on that basis, makes sweeping claims about the inclusion of AI. Getting the vendor to talk through the implementation lets you assess whether it is a point AI solution or a more comprehensive way to bring AI to security data.
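To make that distinction concrete, here is a minimal, purely illustrative sketch of what a “point” AI solution can amount to under the hood. The dataset, feature names, and model choice are all hypothetical and invented for this example; the point is that a single, simple classifier on one data type is legitimately machine learning, yet a long way from a comprehensive AI capability.

```python
# Purely illustrative sketch: a "point AI" product might boil down to one
# simple classifier trained on a single type of data. All features, data,
# and values here are hypothetical, not drawn from any real vendor.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical login-event features: [failed_attempts, session_minutes, off_hours]
X = np.array([
    [0, 35, 0],  # ordinary sessions...
    [1, 42, 0],
    [0, 18, 0],
    [9,  2, 1],  # ...and a few suspicious ones
    [7,  3, 1],
    [8,  1, 1],
])
y = np.array([0, 0, 0, 1, 1, 1])  # 0 = benign, 1 = suspicious

model = LogisticRegression().fit(X, y)

# Score a new event. A thresholded probability like this is genuine
# machine learning, but marketing it as comprehensive "AI" is a stretch.
new_event = np.array([[6, 4, 1]])
print(model.predict_proba(new_event)[0, 1])  # probability the event is suspicious
```

Asking a vendor to walk through exactly this level of detail, what the inputs are, what the model is, and what it outputs, quickly reveals where a product sits on that spectrum.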
Question 2. Ask about the flexibility of the AI models. Does the vendor claim to use a proprietary model that will solve “all the problems”? Can the customer alter this model? Can different models all work on the same data, or can your data only be worked on by the models bundled with the security product? Every enterprise is different, and that includes its security needs; there is no one-size-fits-all.
Question 3. Ask about the application of AI models. Can models be applied to different data sets? Can log data, audio data (e.g., phone recordings), video data (e.g., security cameras), and other sources (transactional data, for example) all be worked on? And if so, can these data sets work together, or must they remain independent? Applying AI to data can be great, but an organization’s data stretches across silos, and if the AI can only work on certain silos, something is likely missing.
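As a purely hypothetical illustration of the kind of architecture worth probing for, the sketch below shows one way models can span silos: each source is mapped into a shared feature space so a single scoring function can handle events from logs, transactions, or anything else. Every interface, feature, and weight here is invented for this example and does not describe any particular product.

```python
# Hypothetical sketch of cross-silo scoring: each extractor maps its silo's
# raw records into one shared feature space so a single model can score
# events from any source. Nothing here describes a real product's design.
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    source: str            # "logs", "transactions", "audio", ...
    features: List[float]  # shared space: [frequency, deviation, risk_weight]

def from_log(line: str) -> Event:
    # Invented parsing rule: count failure markers in a log line.
    return Event("logs", [float(line.count("FAIL")), 0.2, 0.5])

def from_transaction(amount: float, is_foreign: bool) -> Event:
    # Invented mapping: scale the amount and flag cross-border activity.
    return Event("transactions", [1.0, amount / 1000.0, 1.0 if is_foreign else 0.1])

def score(event: Event) -> float:
    # Stand-in for any model; the single shared interface is the point,
    # not the arithmetic.
    weights = [0.5, 0.3, 0.2]
    return sum(x * w for x, w in zip(event.features, weights))

events = [from_log("FAIL FAIL login root"), from_transaction(4500.0, True)]
for e in events:
    print(e.source, round(score(e), 3))
```

If a vendor’s answer amounts to “each silo gets its own isolated model with no shared representation,” that is precisely the gap this question is meant to expose.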
Question 4. How will new AI approaches be incorporated into the solution? Can the vendor describe how this process works? Can the vendor provide examples of how past AI advances were incorporated into the solution and how the development, testing, implementation, and licensing played out? The last component, licensing, is critical: was an organization’s data held hostage, kept away from new AI until a fee was paid to apply the algorithm? This isn’t necessarily bad; if the vendor developed the new AI itself, a fee makes sense. But if the vendor merely applied someone else’s algorithm to the data once the licensing fee was paid, that is something an infosec practitioner will want to know.
Question 5. Does the product advance the security team’s data knowledge and skills? Does the platform allow security practitioners to apply the latest AI toolkits? Does the tool help practitioners learn how their data works and grow their understanding of data engineering and data science as it pertains to the organization’s data? Or is the solution a black box that forces the organization to rely on the vendor’s expertise to solve security problems? A balance must be struck between working with vendors and growing an internal talent pool; a product that allows growth will serve the organization better.
John acknowledges that no one person on the vendor’s team is likely to have the answers to all of these questions, nor is any one person on the evaluation team likely to have the technical knowledge required to assess the vendor’s responses. He believes these questions will open the conversation; it should be a team effort, on both sides, to provide relevant information and to evaluate how a solution touting AI can help an organization use its data more intelligently.
By: Steven Bowcut, CPP, PSP, Brilliance Security Magazine Editor-in-Chief.