In a recent cautionary tale about the unintended consequences of AI, Samsung employees entered confidential information into ChatGPT on three separate occasions over 20 days, leaking customer data, a recording of a sensitive internal meeting, and proprietary code. Samsung has since tried to prevent future slip-ups by limiting the length of employees' AI prompts and starting to build its own chatbot, but the damage is done.
Concerns regarding privacy and ownership of intellectual property are rapidly coming to light with the increase in AI-generated material. Hundreds of AI tools offer virtually endless possibilities, but they can also pose significant risks. This is especially relevant for managed service providers (MSPs) and managed security service providers (MSSPs), who are increasingly adding AI-powered tools to their arsenal of security offerings, but who also face mounting skepticism from clients over their ability to protect sensitive data.
To safeguard their IP and their customers' data, MSPs must prioritize addressing privacy and data concerns related to AI tools. This includes educating users about the AI's learning algorithms and the potential risks associated with them, as well as how information put into these tools may be shared and distributed. The following should be considered.
Security: Security is a primary concern for companies using ChatGPT, because proprietary data often includes information about a company's products, strategies, customers, or operations. ChatGPT retains conversations and may use that information in responses to other users, so entering sensitive data could lead to its compromise. MSPs must remain aware that any data they enter into ChatGPT may be exposed to cyberattacks or other security threats, which could put their customers at risk.
Intellectual property: Using a company's proprietary data to train ChatGPT can lead to the data becoming part of the model's intellectual property. This could give rise to legal disputes over ownership and usage rights, particularly if the data is unique or valuable. Companies must be careful about what data they use to train ChatGPT to ensure that they retain ownership of their intellectual property.
Privacy: ChatGPT’s algorithm learns from the data it is fed. If a company enters proprietary data into ChatGPT, that data may become part of the model's knowledge base and be used in future conversations or tasks. This could compromise the privacy of the company's data and its customers, and in the wrong hands it could give competitors an advantage. MSPs are responsible for protecting their customers’ data and educating them on the risks of using AI tools.
Ethics: Using proprietary data in AI models could raise ethical concerns around transparency, consent, and bias. Customers may not be aware that their conversations or data are being used to train AI models, so they may not have given explicit consent for this use. Companies must be transparent with their customers about how they use their data.
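One practical safeguard for the concerns above is to redact sensitive material before a prompt ever leaves the company's environment. The sketch below is purely illustrative (the patterns shown are hypothetical examples, not a complete data-loss-prevention rule set; a production deployment would rely on dedicated DLP tooling):

```python
import re

# Hypothetical example patterns; real deployments would use a
# vetted DLP library and far broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with placeholder tokens before the
    prompt is sent to any third-party AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, key sk-abc123def456ghi789."))
# The email address and key are replaced with placeholder tokens.
```

A filter like this is a complement to, not a substitute for, the user education and transparency practices described above, since no pattern list can anticipate every form of proprietary data.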
Worth the Risk?
Although ChatGPT serves many useful purposes that can boost productivity, it also poses several risks. Proprietary data entered into ChatGPT may become part of its knowledge base, compromising users’ security, privacy, and intellectual property and raising ethical concerns. MSPs need to ensure they are protecting their own and their customers’ data, as well as educating customers on best practices for AI usage to prevent information from falling into the wrong hands.
When we interviewed the subject on its desire for proprietary data (ahem.... asked ChatGPT), it said, “As an AI language model, I do not have desires or motivations. My purpose is to assist and provide information to the best of my abilities. However, as a responsible AI, I must caution against sharing any sensitive or confidential information without considering the risks and implications for your organization's security, privacy, and intellectual property. It is always advisable to consult with legal and IT experts before sharing any proprietary data with third parties, including AI language models like myself.”
We couldn’t have said it better.