AI is becoming increasingly mainstream as potential business use cases multiply almost daily. The technology is only just emerging, and it’s already changing how we work.
That said, AI—specifically generative AI—is still in its infancy. From a legal and security perspective, it is largely untested. At best, we’d call it a little murky. At worst, the indiscriminate use of AI could have dangerous long-term impacts on your business. Until countries around the world solidify their positions and create a legal framework for the industry, we just don’t know.
Now, that doesn’t mean businesses shouldn’t use generative AI. The application of AI is a nuanced topic that isn’t going away any time soon, and there are significant capabilities in AI that can bring a lot of value to businesses in every market sector.
But diving headfirst into this untested tech may expose your business to unnecessary risk, and companies must evaluate the AI landscape in relation to their own individual risk tolerance and data security concerns.
Not all AI is created equal
Artificial Intelligence (AI) has been around for decades. Depending on how you define AI, you can trace it as far back as 1956. At its core, AI is just the use of technology that enables computers to mimic cognitive functions and solve complex problems. It can include your phone’s GPS, Apple’s digital assistant Siri, or the autocorrect on your last text message.
We’re not quite at I, Robot just yet.
Today, most of the conversation around AI, and the explosive growth in the technology, concerns machine learning (ML), a subset of AI that enables a system to learn from experience and improve over time. The more data an ML system works with, the better it generally becomes.
Even machine learning isn’t that new of a concept when it comes to AI.
Now, the use of machine learning for generative AI—that’s what has taken the world by storm.
What is generative AI?
Generative AI is a type of program built to produce outputs based on prompts from a user. It’s called “generative” because it doesn’t stick to a set of pre-established answers; instead, it generates responses based on the data it has access to, its own “experience,” and the user’s input.
At the moment, common uses for generative AI include coding, content creation, drafting documents, and customer support. And the possibilities are growing by the day. For content creation alone, specific models can generate copy, video, art, and even audio files.
The most well-known generative AI is ChatGPT (it’s also the fastest-growing consumer application in history). ChatGPT is a “language model” application from OpenAI, designed to work as a conversation between human input and program output.
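To make that input-and-output loop concrete, here’s a minimal sketch of calling a language model programmatically using OpenAI’s Python SDK. The model name and prompt are illustrative, and the exact client interface depends on the SDK version you install:

```python
from openai import OpenAI

# The client reads the OPENAI_API_KEY environment variable by default.
client = OpenAI()

# A single-turn "conversation": the user's prompt goes in, generated text comes out.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; use whichever model your account offers
    messages=[
        {"role": "user", "content": "Draft a two-sentence product update for our newsletter."}
    ],
)

print(response.choices[0].message.content)
```

Notice that everything placed in `messages` leaves your environment and is handled under the provider’s own data policies, which is exactly where the privacy questions begin.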
While ChatGPT is lauded for its user experience and ability to generate human-like content, its business implications can be, well, murky.
ChatGPT and other generative AI models are developing much faster than any accompanying government oversight or regulation, including privacy regulation. Without appropriate caution, using them can expose your business to complex ethical and legal risks.
Why businesses should exercise caution with AI
From a business perspective, there are several categories of risk when it comes to the use of generative AI, from copyright and intellectual property to data privacy.
Businesses face potential risks both from using the output of generative AI models and from exposing their own data to those models.
Remember, most generative AI models, including ChatGPT, do not guarantee data privacy. This is particularly important for businesses whose contracts promise privacy or confidentiality to clients. If a business enters information into a chatbot, the model may use that data in its machine learning process in ways the business can’t predict, potentially violating confidentiality agreements between the business and its clients.
What data do generative AI models collect?
While many AI models state that they don’t use information provided by users, that doesn’t mean that they don’t collect any data. For example, ChatGPT collects IP addresses, browser types and settings, and uses cookies to collect the user’s browsing activities over time. It can also share all of this information with its vendors or third parties, including law enforcement officials, without notice.
This illustrates the importance of carefully reviewing the features and settings within any given AI tool. Businesses should establish processes for evaluating these tools, checking whether data collection defaults to opt-in or opt-out, and assessing how that aligns with their privacy needs.
Legislative reactions to generative AI and data privacy
While the United States doesn’t have specific laws targeting the use of generative AI for data privacy, there are a number of considerations that businesses should take into account.
First, while the United States may not have an overarching federal data privacy law, a growing number of U.S. states do have comprehensive data privacy and consumer protection acts. In some states, such as California, users have the right to rescind consent and require entities to “forget” their data. Businesses are also legally required to notify any third party with access to that person’s data of the rescinded consent, and that third party must likewise “forget” the person’s data.
Here’s the catch: removing data from software like ChatGPT isn’t a simple matter of clicking “delete,” which means that businesses may need to implement additional safeguards and processes if they’re leveraging these tools. If they don’t, businesses subject to privacy regulations like the GDPR may face long-term consequences for how they use AI bots.
Now, on a federal level, the U.S. government is taking steps to define the parameters of “responsible AI.” It’s very reasonable to assume that within the next couple of years, there will be legislation on the use and regulation of generative AI. As such, companies that grow to rely on AI models might find themselves legally liable for their business practices, or dependent on a business model that new regulation renders unworkable.
The European Union’s Artificial Intelligence Act
A number of countries in Europe, including France and Spain, have been investigating OpenAI over data breaches and other complaints.
The EU is currently considering the Artificial Intelligence Act (AI Act), proposed legislation to strengthen regulations around data quality, transparency, and accountability. This includes a ban on the use of AI in biometric surveillance and a requirement to disclose AI-generated content.
EU lawmakers approved further amendments to the legislation in June 2023, and it could pass before the end of the year, though it may not come into effect for a few more years. If passed, the act would apply not only to businesses headquartered inside the EU but also to anyone who deploys AI systems in the EU.
The AI Act could have far-reaching consequences for businesses worldwide, impacting how they can use AI technology without violating privacy regulations. It could also inspire similar rules elsewhere, much as the GDPR set off a wave of privacy regulation in the US.
AI models try to pass any liability onto the user
While many data privacy issues around generative AI are still very much undecided, the platforms themselves have gone to great lengths to limit their own legal liability for the use of their models.
OpenAI’s Terms and Conditions (under section 7.A) specify that anyone using its software will hold OpenAI and its affiliates harmless from any claims, losses, or expenses arising from content or services developed using OpenAI’s services, even where that use violates applicable law.
While AI models are attempting to pass on liability, many of them have become the subject of legal complaints, and this will most likely continue. Depending on the results of these ongoing suits, companies that have used AI to generate products or services may find themselves increasingly at risk due to those creations.
To minimize risk exposure, businesses concerned about data privacy can protect themselves by setting clear rules around:
- How they evaluate tools for use
- Who should use them
- What information can be used in AI tools (one way to enforce this is sketched after this list)
- How risks will be managed
- And more
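As a concrete illustration of the “what information can be used” rule, here’s a minimal sketch of a pre-submission check that flags obviously restricted content, such as email addresses or confidential project names, before a prompt is sent to an external AI tool. The patterns and blocked terms are purely illustrative; a real policy would be defined with your legal and security teams:

```python
import re

# Illustrative patterns for data your policy might prohibit in external AI prompts.
RESTRICTED_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN-like number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

# Hypothetical list of confidential client or project names.
BLOCKED_TERMS = ["Project Falcon", "Acme Corp"]

def check_prompt(prompt: str) -> list[str]:
    """Return a list of policy violations found in the prompt."""
    violations = []
    for label, pattern in RESTRICTED_PATTERNS.items():
        if pattern.search(prompt):
            violations.append(f"contains an {label}")
    for term in BLOCKED_TERMS:
        if term.lower() in prompt.lower():
            violations.append(f"mentions blocked term '{term}'")
    return violations

prompt = "Summarize the Project Falcon contract and email it to jane.doe@example.com"
issues = check_prompt(prompt)
if issues:
    print("Prompt blocked:", "; ".join(issues))
else:
    print("Prompt allowed")
```

A check like this doesn’t replace training or contractual controls, but it gives employees immediate feedback before sensitive data ever leaves the building.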
AI privacy risks and what they mean for businesses
Over the next decade, businesses will most likely increase their use of AI, barring any substantial government legislation. While laws may still be catching up to AI, there are privacy risks that businesses can identify and mitigate today.
First and foremost, businesses should hold off on approving AI uses for their company until they understand the privacy risks associated with those AI models.
Ethical considerations
As new and complicated as AI is, it’s no surprise that there is a pile of ethical considerations to factor in, especially where privacy is concerned.
Some ethical issues fall on the business side. For example, if your business provides data to an AI tool, the AI can use that information to build models that your competitors may be able to access and learn from. And let’s not forget about copyright: tools like ChatGPT are trained on large volumes of internet data, some of which may include copyrighted material. If you rely on that material for business purposes, you could risk copyright infringement.
Another issue is that of bias. That’s a fascinating, complex question that we can’t really do justice to here. But the short version is that, despite the efforts of engineers, AI can produce systematically biased results.
No matter the industry you’re in, this can be highly problematic. It can skew data sets and impact algorithms meant to support monitoring and decision-making (think hiring or law enforcement software).
Consumer trust must be prioritized
Beyond corporate liability, the use of AI can also erode consumer trust, especially when it comes to data privacy. AI models are sometimes called “black boxes” because we don’t know exactly how they come up with their output or what information they use to create it. Because there’s a lot we still don’t know about generative AI, some consumers are naturally wary of businesses that rely on it.
And if you experience a consumer data breach due to the use of AI, you could lose your customer base and face significant financial hardship (the average cost of a data breach in 2023 is $4.45 million).
Employee training is more important than ever
Even if your legal team understands the nuance of using AI in the workplace, not every employee will inherently understand why they shouldn’t use ChatGPT to help them with a specific work task.
Companies that want to optimize their data privacy in the face of AI should be integrating additional employee training for both seasoned staff and new hires to ensure everyone understands the policy around the use of generative AI models. Clear corporate policies covering how AI is used, which use cases are acceptable, and how to evaluate vendors can also provide valuable support.
Data security implications
Just as with every piece of technology in history, there will be people who use AI models with good intent and others who use them unscrupulously. As AI models become increasingly sophisticated, this could expose companies to a greater risk of data breaches and security threats.
This means if your business is weighing whether to add an AI vendor to your roster, it should be treated like any other vendor and undergo a thorough security vetting.
The potential evolution of phishing schemes
Consumers are already seeing an increase in voice phishing calls, in which an AI model recreates a person’s voice from social media or public videos for fake kidnapping and ransom calls.
With the use of AI audio models, it’s easy to see how phishing schemes could become more sophisticated. Attackers may be able to crawl public videos, such as social media content, and use AI audio models to create a voice message that imitates your CEO. Instead of sketchy emails, employees may have to watch out for voice messages from their “boss” asking for “help” by visiting a random URL.
Because phishing attempts are becoming this realistic and compelling, it’s more important than ever to provide regular training on the most current threats. Not sure how to update your training programs? Working with a specialist can help you establish adaptable training policies.
Data protection and vulnerabilities
Hackers may use AI to test and improve malware to make it more effective against your company’s data security system. That said, companies may also be able to use AI to identify threats and vulnerabilities in their security system.
Google, for example, was able to optimize its data centers by using AI to monitor processes such as backup power, cooling, and power consumption, and found that doing so increased its energy savings over time.
Employee data privacy protections and AI
Generative AI is a major concern for data privacy, but it isn’t the only type of AI that could expose your business to risk.
California’s Privacy Rights Act (CPRA) added employee data protection to the state’s existing Consumer Privacy Act (CCPA). Under the CCPA/CPRA, consumers, employees, and job applicants have a number of protections regarding the use of their personal information, such as the right to opt out of having their personal information used in profiling or automated decision-making.
If your business employs anyone residing in the EU, your company is also subject to the EU’s General Data Protection Regulation (GDPR). Under the GDPR, employees have the right to object if they believe their data is being processed for an improper purpose, and the employer must then cease processing that data for that purpose.
In other words, if your company uses AI models to profile potential job applicants or current employees, or uses employee data in AI models unrelated to their employment, you could be in violation of state regulations and subject to civil penalties. The concern is already showing up in legal form: NYC’s Department of Consumer and Worker Protection recently began enforcing legislation designed to prevent bias in automated hiring tools.
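As an illustration of what respecting those opt-out rights might look like in practice, here’s a minimal sketch that routes applicants who have opted out of automated decision-making to a human reviewer before any records reach an automated screening step. The record fields and scoring logic are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    resume_text: str
    opted_out_of_automated_decisions: bool  # captured when consent preferences are collected

def automated_screen(applicant: Applicant) -> float:
    """Placeholder for an automated scoring step (e.g., a keyword or model-based score)."""
    return float(len(applicant.resume_text.split()))

def screen_applicants(applicants: list[Applicant]) -> None:
    for applicant in applicants:
        if applicant.opted_out_of_automated_decisions:
            # Respect the opt-out: route to a human reviewer, not the automated pipeline.
            print(f"{applicant.name}: sent to manual review (opted out)")
        else:
            score = automated_screen(applicant)
            print(f"{applicant.name}: automated score {score}")

screen_applicants([
    Applicant("A. Rivera", "ten years of operations experience", True),
    Applicant("B. Chen", "data analyst with SQL and Python experience", False),
])
```

The key point is that the opt-out check happens before any data reaches the automated step, and that the fallback is a documented human process.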
The best course of action: evaluate any potential (or current) hiring tools for measures to protect against bias.
Data privacy, AI, and the court of public opinion
Even if your data privacy policy and use of AI models pass legal muster, it’s vital to keep a critical eye on industry developments. AI’s ability to boost efficiency and expand capabilities is invaluable, but as we’ve seen, the landscape can shift quickly. Planning for contingencies can help safeguard against missteps that could damage your reputation with consumers.
If a business or organization uses AI models in ways that cast doubt on how it handles employee or consumer data, the fallout could have long-lasting impacts on the health of the business. It’s not just sales that could suffer; businesses with improper data privacy policies may find it more difficult to attract top applicants in their field.
And because machine learning is such a highly scrutinized technology, even the perception of privacy violations through AI could harm your business.
The bottom line
There are many fantastic use cases and capabilities for AI in every market sector. It has the power to provide real advantages in any number of business operations. The technology is only getting started, and there are many possibilities in the near term.
However, when it comes to the privacy risks for your business’s sensitive data, there are many unknowns. Businesses leveraging AI should plan to track issues and have plans in place to mitigate risk. Every business will have to make its own decisions regarding balancing the risk and reward of AI models. But no matter what you decide, it can’t hurt to proceed with caution.