AI and Data Privacy: What You Need to Know About the Risks
Generative AI, including platforms like ChatGPT, DALL-E, Google Gemini, and Apple Intelligence, has revolutionized our relationship with technology. Perhaps these tools have completely changed the way you work and interact with the internet.
There are countless ways to use these platforms, most of which are built on large language models, or LLMs. These chatbots can help with brainstorming, writing, and even coding. But they can also pose significant risks if used carelessly.
One of the biggest concerns? Employees inadvertently exposing sensitive company information. As CISO of IONIC Health and CEO of NESS, I see daily cases where careless use of AI puts data at risk.
The Real Problem
AI models process and store data differently than traditional software. Public AI platforms generally retain the data you enter for training purposes, which means anything you share can be used to refine future responses, or, worse, be inadvertently exposed to other users.
Think about it: when you paste a code snippet into ChatGPT, when you share a business strategy, when you describe an internal company problem, that data can be stored and used to train the model. And once the data is there, you've lost control over it.
The Main Risks
There are two main risks when entering sensitive data into public AI platforms.
First, exposure of private company data. Proprietary data, such as project details, strategies, software code, and unpublished research, can be retained and influence future AI outputs. This means confidential information could end up being revealed in responses to other users.
Second, confidential customer information. Personal data or customer records should never be entered, as this can lead to privacy violations and legal repercussions. In the healthcare sector, where IONIC Health operates, this is especially critical due to regulations like HIPAA and GDPR.
Many AI platforms let you opt out of having your inputs used for training, but you shouldn't rely on that setting as a definitive safeguard. Think of AI platforms like social media: if you wouldn't publish it, don't enter it into AI.
Check Before Using AI at Work
Before integrating AI tools into your workflow, follow these critical steps.
Review your company's AI policies. Many organizations now have policies governing AI use. Check if your company allows employees to use AI and under what conditions. If there's no policy, that's a red flag. The company needs to define clear rules.
See if your company has a private AI platform. Many companies, especially large corporations, now have internal AI tools that offer greater security and prevent data from being shared with third-party services. If your company has this, use it. If not, consider suggesting it.
Understand data retention and privacy policies. If you use public AI platforms, review their terms of service to understand how your data is stored and used. Specifically, analyze their data retention and data use policies. It's not fun to read terms of service, but it's necessary.
How to Protect Your Data When Using AI
If you're going to use AI, use it safely. Here are practices that really work.
Stick to secure, company-approved AI tools at work. If your organization offers an internal AI solution, use it instead of public alternatives. If your workplace isn't there yet, consult your supervisor about what you should do. Don't assume it's okay just because everyone does it.
Think before you click. Treat AI interactions like public forums. Don't enter information into a chatbot if you wouldn't share it in a press release or post it on social media. It's a simple rule, but effective.
Use vague or generic inputs. Instead of entering confidential information, use general, non-specific questions as your prompt. For example, instead of pasting real code from your application, describe the problem generically. Instead of sharing real customer data, use fictional examples.
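To make the "generic inputs" idea concrete, here is a minimal sketch of a pre-prompt filter. It isn't tied to any particular AI platform, and the patterns and placeholder labels are illustrative assumptions, not an exhaustive list; a real filter would need rules tuned to your own identifiers and secrets.

    import re

    # Illustrative patterns only; a real filter needs rules for your own
    # identifiers (customer IDs, project code names, internal hostnames, etc.).
    REDACTIONS = [
        (re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"), "[EMAIL]"),          # email addresses
        (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[ID-NUMBER]"),             # SSN-style numbers
        (re.compile(r"(?i)(api[_-]?key|secret|password)\s*[:=]\s*\S+"),
         r"\1=[REDACTED]"),                                                # hardcoded credentials
    ]

    def sanitize_prompt(text: str) -> str:
        """Replace obviously sensitive tokens with placeholders before sending text to a public AI tool."""
        for pattern, placeholder in REDACTIONS:
            text = pattern.sub(placeholder, text)
        return text

    raw = "Our API call fails. api_key=sk-12345 is set, and errors go to ops@example.com."
    print(sanitize_prompt(raw))
    # -> Our API call fails. api_key=[REDACTED] is set, and errors go to [EMAIL].

The question still makes sense to the chatbot, but the secret and the address never leave your machine. The same principle works when you do it by hand: rewrite the prompt so it describes the problem, not your data.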
Protect your AI account with strong passwords and MFA. Treat your AI accounts like any other: use a unique, complex, and long password, at least 16 characters. Enable multi-factor authentication to add another solid layer of protection. A compromised AI account can expose your entire conversation history.
Real Cases I've Seen
Let me share some real cases I've seen, anonymized, to illustrate the risks.
A developer pasted complete source code from a critical application into ChatGPT to ask for help with a bug. The code contained hardcoded credentials and proprietary business logic. Months later, similar information appeared in ChatGPT responses to other users. We can't prove it was from that code, but the coincidence is suspicious.
A marketing team used ChatGPT to generate content, sharing product launch strategies, target audience, and even information about competitors. When the product was launched, a competitor had too much information about the strategy. Again, we can't prove it, but the timing is suspicious.
An HR professional used ChatGPT to review internal policies, sharing details about salary structure, benefits, and hiring processes. Information that should have been confidential was entered into a public platform.
These cases show that the risk is real, not theoretical.
Increase Your AI IQ
Generative AI is powerful. Your judgment should be stronger. Use AI intelligently, especially when sensitive data is involved.
By being careful about what you share, following company policies, and prioritizing security, you can benefit from AI without putting your company at risk. AI is an incredible tool, but like any powerful tool, it needs to be used with wisdom and care.
Remember: when it comes to sensitive data, when in doubt, don't share. It's better to be cautious and protect confidential information than to discover later that it has been exposed.
Final Reflection
As a security professional, I see AI as a dual tool. On one hand, it can increase productivity and efficiency in incredible ways. On the other, it introduces new risk vectors that we need to understand and manage.
The key is to balance benefits with risks. Use AI, but use it consciously. Take advantage of the power of technology, but protect the data entrusted to you.
And if you're responsible for policies in an organization, don't wait. Create clear policies on AI use now. Train your team. Provide secure tools. The cost of not doing this can be very high.
Want to discuss AI security or need guidance on AI use policies in your organization?
Connect with me on LinkedIn and let's exchange experiences.
Ricardo Esper is CEO of NESS Processos e Tecnologia (since 1991), CISO of IONIC Health, and CEO of forense.io. Certified CCISO and CEHv8, he is an active member of HackerOne, OWASP, and the Privacy and Data Protection Commission of OAB SP.