Beyond PII: Finding and Mapping Derivative Behavior Data

Everyone has had the experience of wondering exactly how in the world a company or organization obtained the information it has about them.  It may be the speed with which social media ads matching a product or service you had barely more than thought about are served up in your feeds that causes you to scratch your head in wonder.  Or maybe it’s the “welcome to the neighborhood” junk mail that seems to arrive at your new address even before you do.  In many cases, this information was derived or inferred from data that flows continuously around the internet.  What if you’re not comfortable with others having this information about you, or what if the information is simply wrong?  What if your company depends on this information but needs to comply with data protection regulations?

Brilliance Security Magazine interviewed Kristina Bergman, CEO and Founder of Integris Software, to gain a better understanding of what is, or should be, protected information and how companies can manage personal information about their customers.  Integris Software helps organizations meet rigid compliance mandates with the ability to visualize where all personal information is located across the enterprise, prove adherence to regulatory standards, and empower strategic decision making.

Kristina Bergman, CEO and Founder of Integris Software

Kristina explained that this idea of behavior data being personal information was the impetus for her founding Integris Software.  She said, “I’ll tell you that what got me interested in this space, in particular, was the idea that behavior data should be protected as personal information.  I used to work as a venture capitalist and during that time I was looking for a company to invest in that could identify personal information.  Not just what’s considered PII here in North America, like credit card numbers, social security numbers, et cetera, but also what is considered personal information in Europe under the new law that was coming out at that time called GDPR.  Their definitions of personal information were very broad and a bit different than what we have here in the US.  This included behavioral information such as sexual orientation, race, religion, and ethnicity.  If you think about how we define personal information, it’s extremely narrow here in the states, and I think that leaves consumers exposed to all sorts of risks.  It also leaves companies open to potential embarrassment if they get breached exposing the information that they have about people.”

Derivative Behavior Data

To illuminate the types of data that could be used to derive behavior information about individuals, she continued, “Take an example like the Apple Watch or Fitbit.  These devices collect all sorts of non-HIPAA healthcare data about people.  HIPAA’s definition is pretty specific for what’s considered personal information or healthcare data.  But when you think about the apps that are on your watch you see that they can measure and track your heart rate and activity level, among other things.  This information is not necessarily covered under HIPAA but is healthcare information that could be used in all sorts of different ways.  From this data, you could derive how often a person exercises or how frequently they move about.  This can be a good indicator of physical health.  Now let’s say an insurance company bought that data.  That data could then be used to decide what a person’s premiums should be.  This information is available today and is potentially available for purchase by anyone who might want to underwrite anything to do with a person’s health.”

“Another example is illustrated by a friend of mine who worked at a company that collected GPS data, consolidated it, and then sent it back out to help their consumers understand traffic patterns.  Well, from GPS data being collected from cars you can infer all sorts of things, not the least of which is where people live and work.  You can also infer other behavioral data about them.  For instance, if they frequently visit a particular location on a particular day of the week, such as a church or a synagogue or a mosque, you could infer their religion.  There are all sorts of data out there that most people don’t consider personal information, but it could be used to impact decisions about access to specific services or products,” she added.

“There’s a lot out there that could be used to potentially discriminate against people or change their level of service. The thing that really got me interested in the privacy world was the power of data and how much derivative data is out there and the story that it tells us about people. It’s not just your name and date of birth that people should be aware of. They should be aware of all the other derivative data that’s out there, which is data that a company or an organization can infer about you based on other seemingly innocuous bits of data such as your GPS location,” she explained.

Our Right To Be Forgotten – The Solution

When asked how she believes individuals should go about enforcing their right to be forgotten, Kristina said, “Data gets transferred in and out of organizations on a daily basis.  There are data sharing agreements.  There is data purchasing.  Companies are buying streams and feeds of data from social media sites and external vendors.  Acquisitions happen and all sorts of new data enter an organization.  Because of these things, companies really don’t have a good sense of what data actually exists within their data repository.  The biggest blind spot in complying with data protection law is that organizations just don’t know what data they have.  And so that’s where companies like ours come in.  We can tell them what they’ve got and, more than that, we can tell them what rules apply to it.  We provide companies with the ability to create rules, like a retention rule, for example.  If a company says we don’t want to keep customer records for more than seven years after the last purchase of our product or service, our solutions can go and find those customer records that exceed the parameters of the retention rule and kick off an event to remove that data.  With this process, our customers can respond to their customers and say, yes, I have this type of data about you, and then kick off a workflow to remove that data if requested to do so.”
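The retention-rule process Kristina describes can be sketched in a few lines of code.  The snippet below is a hypothetical illustration only, not Integris Software’s actual implementation: the record structure, the function names, and the seven-year cutoff are assumptions drawn from her example, and the “removal” step is a stand-in for whatever deletion workflow a real system would trigger.

```python
# Hypothetical sketch of a retention rule: find customer records whose
# last purchase is older than the retention window and queue them for
# removal. All names and data structures here are illustrative.
from datetime import date, timedelta

RETENTION_YEARS = 7  # "no customer records more than seven years after the last purchase"

def find_expired_records(records, today=None):
    """Return records whose last purchase falls outside the retention window."""
    today = today or date.today()
    cutoff = today - timedelta(days=365 * RETENTION_YEARS)
    return [r for r in records if r["last_purchase"] < cutoff]

def kick_off_removal(record):
    # In a real system this would start a deletion workflow
    # (audit log entry, erasure across repositories). Here we just report.
    print(f"queued for removal: customer {record['id']}")

records = [
    {"id": 1, "last_purchase": date(2010, 5, 1)},
    {"id": 2, "last_purchase": date(2018, 3, 15)},
]
for expired in find_expired_records(records, today=date(2019, 1, 1)):
    kick_off_removal(expired)
```

Run against the sample data, only the first record exceeds the seven-year window and gets queued; the same check, run continuously, is what lets a company answer a customer’s “what do you have about me?” request and then act on a deletion demand.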

“Data protection regulations are hampering many companies’ ability to continue to use their most valuable asset, which is data.  What we do is help them get a handle on exactly what data they have and exactly what policies need to be applied, and then we help them automate the remediation of any policy violations within their organization.  This allows them to continue to use their data because they know we’re continuously running, we’re continuously checking, we’re continuously auditing their data repositories.  That means they can continue to use their most valuable asset in ways that will help them grow and fuel their business.  They don’t have to go to the extreme of locking down their data and not using it out of fear of violating regulations.  They can continue to use it and continue to innovate, knowing that they have their customers’ permission.  They’re able to respond to customers immediately, and they’re able to position themselves as a trusted vendor and a trusted steward of their customers’ data,” she concluded.

While data about each one of us flows more and more freely around the globe, being bought, sold, and traded at an ever-increasing pace, data privacy laws are increasing in number and enforcement power as well.  Questions about our individual rights to control who knows what about us are being examined from many angles.  This paradigm shift is certainly good for consumers, and it increases the responsibility that companies and other organizations have to understand exactly what data they have about people, how it is being used, and where it is located – not to mention the responsibility to be able to find and remove this data when requested to do so.

We should expect to see a rise in the number and sophistication of technology solutions designed to help companies comply with legal requirements as well as the demands of their customers relative to personal data, whether factual or derived by inference.

Steven Bowcut, CPP, PSP is the Editor in Chief for Brilliance Security Magazine