
How To Incorporate Data Privacy Into Your Next AI Project
- Posted by GM, Digital Solutions
- On March 17, 2021
As our ability to capture, aggregate, and understand vast amounts of data increases, so too does our desire to put that data to good use with artificial intelligence (AI) and machine learning (ML). For most companies, AI projects represent a tangible opportunity to leverage troves of previously unused data to capture unprecedented efficiencies and productivity. But what if all that data isn’t quite as useful as you might think? And what role do privacy and compliance play in the data you’d use for your AI project?
Daitan has helped dozens of companies capitalize on their product and consumer data by turning it into AI algorithms that deliver results. In our work, however, we’ve uncovered a trend that has caused some AI project leaders to lose sleep.
What have we learned? Regulatory compliance can be a huge roadblock that stops AI projects in their tracks.
In this blog, we’ll discuss why getting buy-in from your company’s legal and compliance experts upfront can help you avoid costly delays – or outright failure – in your upcoming AI project.
Some Data is Off-Limits Due to Data Privacy
Data is the fuel that powers AI/ML models. Every byte of data we feed into an algorithm refines its abilities and strengthens its performance. Just like the neural pathways in our brains that strengthen with repeated experience, AI/ML algorithms become more efficient and accurate as we feed them more data.
As such, it can be tempting to take your entire applicable data lake and dump it on your algorithm. The challenge with this approach is that some, or perhaps all, of that data may be off-limits.
In today’s digital world, data privacy is a key focus for emerging legislation, especially for personally identifiable information (PII). In 2018, the EU rolled out the General Data Protection Regulation (GDPR), considered the gold standard of data privacy legislation, and the effects have reverberated across the globe. Enforcement is on the rise, and even “too big to fail” enterprises are suffering the consequences of non-compliance. Google was fined roughly $56 million (€50 million) for non-compliance in France. That represents only about 0.04% of its annual revenue, but GDPR penalties can run as high as 4% of annual global revenue. That is far from trivial.
If you intend to use PII in your AI project training data set, or if your algorithm will require PII in production, tread carefully. You’ll need to make sure that the data has been gathered in a legal and ethical manner. This means understanding data privacy legislation across the world, as the digital landscape has no physical borders.
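If PII must flow through your pipeline at all, minimize it before the model ever sees it. Below is a minimal sketch in Python (using pandas, with entirely hypothetical column names) that drops direct identifiers and pseudonymizes the record key. Note that hashing is pseudonymization, not anonymization: it reduces risk, but it does not by itself satisfy GDPR.

```python
import hashlib

import pandas as pd

# Hypothetical raw dataset: direct identifiers mixed in with model features.
df = pd.DataFrame({
    "name": ["Ana Silva", "John Doe"],
    "email": ["ana@example.com", "john@example.com"],
    "user_id": ["u-1001", "u-1002"],
    "age": [34, 52],
    "purchases_last_90d": [7, 2],
})

# 1. Drop direct identifiers the model does not need.
features = df.drop(columns=["name", "email"])

# 2. Pseudonymize the record key so rows can still be joined across
#    tables without exposing the raw ID. Under GDPR, hashed IDs are
#    still personal data -- this is pseudonymization, not anonymization,
#    and it is not a substitute for legal review.
features["user_id"] = features["user_id"].apply(
    lambda v: hashlib.sha256(v.encode()).hexdigest()[:16]
)

print(features)
```

For stronger guarantees, your compliance team may point you toward techniques such as k-anonymity or differential privacy; which one applies is as much a legal question as a technical one.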
The challenge here isn’t just new legislation, but also changing legislation. According to Gartner, by 2023 over 65% of the world’s population will have its PII covered under modern privacy regulations, compared to a meager 10% in 2020.
Why does that matter? Consider the timeline for your AI project, both for implementation and monetization. While the PII you’re using now may have been gathered and applied legally, you should consider whether future efforts to collect and use those same datasets will remain legal.
Regulations could cut short the useful life of your AI products, crushing their ROI.
You Need to Build Ethical AI
The world continues to adopt more and more AI technology to solve complex problems. At the same time, many people are increasingly concerned about “handing over the keys” to machines that lack the emotional intelligence – or at least the awareness – of humans. These concerns, and the discussions they have sparked, have given life to the concept of Ethical AI.
The idea of ethical AI is simple: build algorithms whose decisions align with an acceptable moral and legal standard. The challenge, of course, is that this standard is a moving target, with different bullseyes depending on religion, creed, and country, among other factors.
Companies that develop artificial intelligence obviously want to guide their algorithms toward morally and legally correct decisions. In the absence of rigorous, universal principles, however, your goal may need to shift: instead of guaranteeing that the most moral or most legal decision is always made, focus on proving that all reasonable care was taken to reach an ethical one.
To be clear, adding artificial intelligence to your product portfolio can increase your company’s potential, but it simultaneously increases your exposure to risk. Even with the best intentions, artificial intelligence can make spurious decisions that discriminate – or worse. Companies from Goldman Sachs to UnitedHealth to Amazon are experiencing the fallout of questionable algorithmic determinations. They, and many others, are learning that they must prove that the algorithms they build, and the data used to train them, do not introduce unintended or overlooked consequences.
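One measurable process you can put in place is a routine bias audit of model outputs. The sketch below, in Python with pandas and an entirely made-up decision log with a hypothetical protected attribute, computes per-group approval rates and a demographic parity ratio; the “four-fifths” threshold is a common screening heuristic, not a legal standard.

```python
import pandas as pd

# Made-up audit set: one model decision per row, plus a hypothetical
# protected attribute recorded for auditing purposes only.
audit = pd.DataFrame({
    "approved": [1, 0, 1, 0, 0, 1, 1, 0, 1, 1],
    "gender":   ["F", "F", "F", "F", "F", "M", "M", "M", "M", "M"],
})

# Approval rate for each group.
rates = audit.groupby("gender")["approved"].mean()
print(rates)

# Demographic parity ratio: lowest group rate divided by highest.
# The "four-fifths rule" (ratio < 0.8) is a common screening heuristic
# for disparate impact -- a flag for investigation, not legal proof.
parity_ratio = rates.min() / rates.max()
print(f"Demographic parity ratio: {parity_ratio:.2f}")
if parity_ratio < 0.8:
    print("Potential disparate impact: review features and training data.")
```

Running a check like this on every model release gives your legal team concrete evidence that care was taken, which is exactly the standard discussed above.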
The more responsibility we give to robots, the more oversight – and training – they require. This is why we recommend that businesses consult with their legal and compliance teams to understand AI ethics. Like PII legislation, laws governing ethical AI are changing at a rapid clip. The State of Illinois, for example, recently enacted legislation aimed at preventing discrimination against prospective employees whose video interviews are reviewed by AI. Consulting your compliance team will help ensure your AI ethics are in line with legal expectations.
Consult with Your Legal Team Before Starting AI
Because of the risks associated with using PII and building “unethical AI,” consult legal and compliance stakeholders before embarking on a new AI project. Together you can establish best practices and put measurable processes in place to validate your model development and future-proof it as your data evolves. Depending on the size of your organization, those stakeholders could be anyone from the CDO to the CISO, HR, Regulatory Affairs, or outside counsel. As you build a group of stakeholders around AI initiatives, compliance and legal should play an active role.
Regardless of whom you consult, make sure they understand the goals of your project, the data you intend to use (including a clear account of how it was acquired), and the state of AI legislation as it relates to both PII and non-discrimination in the jurisdictions where you intend to deploy your AI. If you fail to do so, you might find yourself in trouble. At best, you’ll likely need to significantly rework your algorithm with new data sources and new models. At worst, you might find yourself on the wrong end of a lawsuit.